00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 947 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3609 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.159 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.160 The recommended git tool is: git 00:00:00.160 using credential 00000000-0000-0000-0000-000000000002 00:00:00.162 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.212 Fetching changes from the remote Git repository 00:00:00.213 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.259 Using shallow fetch with depth 1 00:00:00.259 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.259 > git --version # timeout=10 00:00:00.297 > git --version # 'git version 2.39.2' 00:00:00.297 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.316 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.316 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.083 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.094 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.107 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.107 > git config core.sparsecheckout # timeout=10 00:00:07.119 > git read-tree -mu HEAD # timeout=10 00:00:07.136 > git checkout -f 
b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.154 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.154 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.263 [Pipeline] Start of Pipeline 00:00:07.275 [Pipeline] library 00:00:07.276 Loading library shm_lib@master 00:00:07.276 Library shm_lib@master is cached. Copying from home. 00:00:07.290 [Pipeline] node 00:00:07.301 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.302 [Pipeline] { 00:00:07.312 [Pipeline] catchError 00:00:07.313 [Pipeline] { 00:00:07.331 [Pipeline] wrap 00:00:07.339 [Pipeline] { 00:00:07.346 [Pipeline] stage 00:00:07.347 [Pipeline] { (Prologue) 00:00:07.555 [Pipeline] sh 00:00:07.840 + logger -p user.info -t JENKINS-CI 00:00:07.858 [Pipeline] echo 00:00:07.860 Node: GP11 00:00:07.867 [Pipeline] sh 00:00:08.178 [Pipeline] setCustomBuildProperty 00:00:08.188 [Pipeline] echo 00:00:08.189 Cleanup processes 00:00:08.192 [Pipeline] sh 00:00:08.475 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.475 418048 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.489 [Pipeline] sh 00:00:08.777 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.777 ++ awk '{print $1}' 00:00:08.777 ++ grep -v 'sudo pgrep' 00:00:08.777 + sudo kill -9 00:00:08.777 + true 00:00:08.795 [Pipeline] cleanWs 00:00:08.806 [WS-CLEANUP] Deleting project workspace... 00:00:08.806 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.814 [WS-CLEANUP] done 00:00:08.818 [Pipeline] setCustomBuildProperty 00:00:08.834 [Pipeline] sh 00:00:09.119 + sudo git config --global --replace-all safe.directory '*' 00:00:09.214 [Pipeline] httpRequest 00:00:10.153 [Pipeline] echo 00:00:10.155 Sorcerer 10.211.164.101 is alive 00:00:10.165 [Pipeline] retry 00:00:10.167 [Pipeline] { 00:00:10.181 [Pipeline] httpRequest 00:00:10.186 HttpMethod: GET 00:00:10.186 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.187 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:11.226 Response Code: HTTP/1.1 200 OK 00:00:11.226 Success: Status code 200 is in the accepted range: 200,404 00:00:11.227 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:31.792 [Pipeline] } 00:00:31.809 [Pipeline] // retry 00:00:31.817 [Pipeline] sh 00:00:32.099 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:32.116 [Pipeline] httpRequest 00:00:32.532 [Pipeline] echo 00:00:32.534 Sorcerer 10.211.164.101 is alive 00:00:32.545 [Pipeline] retry 00:00:32.547 [Pipeline] { 00:00:32.562 [Pipeline] httpRequest 00:00:32.566 HttpMethod: GET 00:00:32.567 URL: http://10.211.164.101/packages/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:00:32.568 Sending request to url: http://10.211.164.101/packages/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:00:32.593 Response Code: HTTP/1.1 200 OK 00:00:32.593 Success: Status code 200 is in the accepted range: 200,404 00:00:32.593 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:02:37.943 [Pipeline] } 00:02:37.961 [Pipeline] // retry 00:02:37.969 [Pipeline] sh 00:02:38.263 + tar --no-same-owner -xf spdk_f220d590c6819ff8422b3dca9f8a36dc26cf9429.tar.gz 00:02:41.568 [Pipeline] sh 00:02:41.858 + git -C spdk log 
--oneline -n5 00:02:41.859 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:02:41.859 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:02:41.859 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:02:41.859 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:02:41.859 427304da7 lib/reduce: Reset req->reduce_errno 00:02:41.876 [Pipeline] withCredentials 00:02:41.889 > git --version # timeout=10 00:02:41.902 > git --version # 'git version 2.39.2' 00:02:41.931 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:41.933 [Pipeline] { 00:02:41.941 [Pipeline] retry 00:02:41.943 [Pipeline] { 00:02:41.958 [Pipeline] sh 00:02:42.508 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:02:42.783 [Pipeline] } 00:02:42.802 [Pipeline] // retry 00:02:42.807 [Pipeline] } 00:02:42.821 [Pipeline] // withCredentials 00:02:42.830 [Pipeline] httpRequest 00:02:43.414 [Pipeline] echo 00:02:43.416 Sorcerer 10.211.164.101 is alive 00:02:43.426 [Pipeline] retry 00:02:43.428 [Pipeline] { 00:02:43.443 [Pipeline] httpRequest 00:02:43.448 HttpMethod: GET 00:02:43.449 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:43.450 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:43.458 Response Code: HTTP/1.1 200 OK 00:02:43.458 Success: Status code 200 is in the accepted range: 200,404 00:02:43.459 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:03:26.349 [Pipeline] } 00:03:26.365 [Pipeline] // retry 00:03:26.373 [Pipeline] sh 00:03:26.676 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:03:28.601 [Pipeline] sh 00:03:28.895 + git -C dpdk log --oneline -n5 00:03:28.895 eeb0605f11 version: 23.11.0 00:03:28.895 238778122a doc: update release notes for 23.11 00:03:28.895 46aa6b3cfc doc: fix description of RSS features 00:03:28.895 
dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:28.895 7e421ae345 devtools: support skipping forbid rule check 00:03:28.907 [Pipeline] } 00:03:28.922 [Pipeline] // stage 00:03:28.932 [Pipeline] stage 00:03:28.934 [Pipeline] { (Prepare) 00:03:28.957 [Pipeline] writeFile 00:03:28.973 [Pipeline] sh 00:03:29.266 + logger -p user.info -t JENKINS-CI 00:03:29.281 [Pipeline] sh 00:03:29.573 + logger -p user.info -t JENKINS-CI 00:03:29.587 [Pipeline] sh 00:03:29.881 + cat autorun-spdk.conf 00:03:29.881 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:29.881 SPDK_TEST_NVMF=1 00:03:29.881 SPDK_TEST_NVME_CLI=1 00:03:29.881 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:29.882 SPDK_TEST_NVMF_NICS=e810 00:03:29.882 SPDK_TEST_VFIOUSER=1 00:03:29.882 SPDK_RUN_UBSAN=1 00:03:29.882 NET_TYPE=phy 00:03:29.882 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:29.882 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:29.891 RUN_NIGHTLY=1 00:03:29.896 [Pipeline] readFile 00:03:29.926 [Pipeline] withEnv 00:03:29.928 [Pipeline] { 00:03:29.941 [Pipeline] sh 00:03:30.235 + set -ex 00:03:30.235 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:30.235 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:30.235 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:30.235 ++ SPDK_TEST_NVMF=1 00:03:30.235 ++ SPDK_TEST_NVME_CLI=1 00:03:30.235 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:30.235 ++ SPDK_TEST_NVMF_NICS=e810 00:03:30.235 ++ SPDK_TEST_VFIOUSER=1 00:03:30.235 ++ SPDK_RUN_UBSAN=1 00:03:30.235 ++ NET_TYPE=phy 00:03:30.235 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:30.235 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:30.235 ++ RUN_NIGHTLY=1 00:03:30.235 + case $SPDK_TEST_NVMF_NICS in 00:03:30.235 + DRIVERS=ice 00:03:30.235 + [[ tcp == \r\d\m\a ]] 00:03:30.235 + [[ -n ice ]] 00:03:30.235 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:30.235 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:30.235 
rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:30.235 rmmod: ERROR: Module irdma is not currently loaded 00:03:30.235 rmmod: ERROR: Module i40iw is not currently loaded 00:03:30.235 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:30.235 + true 00:03:30.235 + for D in $DRIVERS 00:03:30.235 + sudo modprobe ice 00:03:30.235 + exit 0 00:03:30.246 [Pipeline] } 00:03:30.261 [Pipeline] // withEnv 00:03:30.266 [Pipeline] } 00:03:30.280 [Pipeline] // stage 00:03:30.292 [Pipeline] catchError 00:03:30.294 [Pipeline] { 00:03:30.309 [Pipeline] timeout 00:03:30.309 Timeout set to expire in 1 hr 0 min 00:03:30.311 [Pipeline] { 00:03:30.326 [Pipeline] stage 00:03:30.328 [Pipeline] { (Tests) 00:03:30.343 [Pipeline] sh 00:03:30.638 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.638 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.638 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.638 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:30.638 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:30.638 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:30.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:30.638 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:30.638 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:30.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:30.638 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:30.638 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:30.638 + source /etc/os-release 00:03:30.638 ++ NAME='Fedora Linux' 00:03:30.638 ++ VERSION='39 (Cloud Edition)' 00:03:30.638 ++ ID=fedora 00:03:30.638 ++ VERSION_ID=39 00:03:30.638 ++ VERSION_CODENAME= 00:03:30.638 ++ PLATFORM_ID=platform:f39 00:03:30.638 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:30.638 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:30.638 ++ LOGO=fedora-logo-icon 00:03:30.638 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:30.638 ++ HOME_URL=https://fedoraproject.org/ 00:03:30.638 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:30.638 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:30.638 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:30.638 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:30.638 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:30.638 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:30.638 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:30.638 ++ SUPPORT_END=2024-11-12 00:03:30.638 ++ VARIANT='Cloud Edition' 00:03:30.638 ++ VARIANT_ID=cloud 00:03:30.638 + uname -a 00:03:30.638 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:30.638 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:31.578 Hugepages 00:03:31.578 node hugesize free / total 00:03:31.578 node0 1048576kB 0 / 0 00:03:31.578 node0 2048kB 0 / 0 00:03:31.578 node1 1048576kB 0 / 0 00:03:31.578 node1 2048kB 0 / 0 00:03:31.578 00:03:31.578 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.578 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:31.578 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:03:31.578 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:31.578 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:31.578 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:31.578 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:31.578 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:31.578 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:31.578 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:31.578 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:31.578 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:31.579 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:31.579 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:31.579 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:31.579 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:31.579 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:31.839 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:31.839 + rm -f /tmp/spdk-ld-path 00:03:31.839 + source autorun-spdk.conf 00:03:31.839 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:31.839 ++ SPDK_TEST_NVMF=1 00:03:31.839 ++ SPDK_TEST_NVME_CLI=1 00:03:31.839 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:31.839 ++ SPDK_TEST_NVMF_NICS=e810 00:03:31.839 ++ SPDK_TEST_VFIOUSER=1 00:03:31.839 ++ SPDK_RUN_UBSAN=1 00:03:31.839 ++ NET_TYPE=phy 00:03:31.839 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:31.839 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:31.839 ++ RUN_NIGHTLY=1 00:03:31.839 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:31.839 + [[ -n '' ]] 00:03:31.839 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.839 + for M in /var/spdk/build-*-manifest.txt 00:03:31.839 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:31.839 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:31.839 + for M in /var/spdk/build-*-manifest.txt 00:03:31.839 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:31.839 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:31.839 + for M in /var/spdk/build-*-manifest.txt 00:03:31.839 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:31.839 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:31.839 ++ uname 00:03:31.839 + [[ Linux == \L\i\n\u\x ]] 00:03:31.839 + sudo dmesg -T 00:03:31.839 + sudo dmesg --clear 00:03:31.839 + dmesg_pid=419463 00:03:31.839 + [[ Fedora Linux == FreeBSD ]] 00:03:31.839 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:31.839 + sudo dmesg -Tw 00:03:31.839 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:31.839 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:31.839 + [[ -x /usr/src/fio-static/fio ]] 00:03:31.839 + export FIO_BIN=/usr/src/fio-static/fio 00:03:31.839 + FIO_BIN=/usr/src/fio-static/fio 00:03:31.839 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:31.839 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:31.839 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:31.839 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:31.839 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:31.839 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:31.839 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:31.839 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:31.839 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:31.839 12:18:01 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:03:31.839 12:18:01 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 
-- $ SPDK_TEST_NVME_CLI=1 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:31.839 12:18:01 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:03:31.839 12:18:01 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:31.839 12:18:01 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:31.839 12:18:01 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:03:31.839 12:18:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:31.839 12:18:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:31.839 12:18:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:31.839 12:18:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.839 12:18:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.840 12:18:01 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.840 12:18:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.840 12:18:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.840 12:18:01 -- paths/export.sh@5 -- $ export PATH 00:03:31.840 12:18:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.840 12:18:01 -- 
common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:31.840 12:18:01 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:31.840 12:18:01 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730805481.XXXXXX 00:03:31.840 12:18:01 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730805481.AyQZCG 00:03:31.840 12:18:01 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:31.840 12:18:01 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:03:31.840 12:18:01 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:31.840 12:18:01 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:03:31.840 12:18:01 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:31.840 12:18:01 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:31.840 12:18:01 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:31.840 12:18:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:31.840 12:18:01 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.840 12:18:01 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:03:31.840 12:18:01 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:31.840 12:18:01 -- pm/common@17 -- $ local monitor 00:03:31.840 12:18:01 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.840 12:18:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.840 12:18:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.840 12:18:01 -- pm/common@21 -- $ date +%s 00:03:31.840 12:18:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.840 12:18:01 -- pm/common@21 -- $ date +%s 00:03:31.840 12:18:01 -- pm/common@25 -- $ sleep 1 00:03:31.840 12:18:01 -- pm/common@21 -- $ date +%s 00:03:31.840 12:18:01 -- pm/common@21 -- $ date +%s 00:03:31.840 12:18:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730805481 00:03:31.840 12:18:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730805481 00:03:31.840 12:18:01 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730805481 00:03:31.840 12:18:01 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730805481 00:03:32.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730805481_collect-vmstat.pm.log 00:03:32.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730805481_collect-cpu-load.pm.log 00:03:32.099 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730805481_collect-cpu-temp.pm.log 00:03:32.099 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730805481_collect-bmc-pm.bmc.pm.log 00:03:33.038 12:18:02 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:33.038 12:18:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:33.038 12:18:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:33.038 12:18:02 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.038 12:18:02 -- spdk/autobuild.sh@16 -- $ date -u 00:03:33.038 Tue Nov 5 11:18:02 AM UTC 2024 00:03:33.038 12:18:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:33.038 v25.01-pre-158-gf220d590c 00:03:33.038 12:18:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:33.038 12:18:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:33.038 12:18:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:33.038 12:18:02 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:33.038 12:18:02 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:33.038 12:18:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.038 ************************************ 00:03:33.038 START TEST ubsan 00:03:33.038 ************************************ 00:03:33.038 12:18:02 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:03:33.038 using ubsan 00:03:33.038 00:03:33.038 real 0m0.000s 00:03:33.038 user 0m0.000s 00:03:33.038 sys 0m0.000s 00:03:33.038 12:18:02 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:33.038 12:18:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:33.038 ************************************ 00:03:33.038 END TEST ubsan 00:03:33.038 ************************************ 00:03:33.038 12:18:02 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:33.038 12:18:02 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:33.038 12:18:02 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:33.038 12:18:02 -- 
common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']' 00:03:33.038 12:18:02 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:33.038 12:18:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.038 ************************************ 00:03:33.038 START TEST build_native_dpdk 00:03:33.038 ************************************ 00:03:33.038 12:18:02 build_native_dpdk -- common/autotest_common.sh@1127 -- $ _build_native_dpdk 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:33.038 12:18:02 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:03:33.038 eeb0605f11 version: 23.11.0 00:03:33.038 238778122a doc: update release notes for 23.11 00:03:33.038 46aa6b3cfc doc: fix description of RSS features 00:03:33.038 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:33.038 7e421ae345 devtools: support skipping forbid rule check 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" 
"mempool/ring" "net/i40e" "net/i40e/base") 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:33.038 12:18:02 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v = 0 )) 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:33.038 patching file config/rte_config.h 00:03:33.038 Hunk #1 succeeded at 60 (offset 1 line). 
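The `lt 23.11.0 21.11.0` trace above steps through `scripts/common.sh`'s `cmp_versions`: split both versions on `.`, `-`, and `:` via `IFS`, then compare component-wise as integers, with the first unequal component deciding the result. A minimal re-implementation of that logic (the function shape below is an assumption reconstructed from the trace, not the upstream source) looks like:

```shell
#!/usr/bin/env bash
# Hedged sketch of the cmp_versions logic traced above (assumed shape).
cmp_versions() {
    # $1 = version A, $2 = operator ('<' or '>='), $3 = version B
    local -a ver1 ver2
    local op=$2 v lt=0 gt=0 len
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-', ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$3"
    len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    # Compare component-wise; the first unequal component decides.
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
    done
    case "$op" in
        '<')  (( lt == 1 )) ;;
        '>=') (( lt == 0 )) ;;
        *)    return 2 ;;
    esac
}

cmp_versions 23.11.0 '<' 24.07.0 && echo "23.11.0 < 24.07.0"
# prints: 23.11.0 < 24.07.0
```

This reproduces the traced results: `23.11.0 < 21.11.0` returns 1 (23 > 21 in the first component), while `23.11.0 < 24.07.0` returns 0, which is why the `rte_pcapng.c` patch below is applied.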
00:03:33.038 12:18:02 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:33.038 12:18:02 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:33.039 12:18:02 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:33.039 patching file lib/pcapng/rte_pcapng.c 00:03:33.039 12:18:02 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:33.039 12:18:02 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:33.039 12:18:02 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:33.039 12:18:02 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:33.039 12:18:02 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:33.039 12:18:02 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:03:33.039 12:18:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:33.039 12:18:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:38.322 The Meson build system 00:03:38.322 Version: 1.5.0 00:03:38.322 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:03:38.322 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:03:38.322 Build type: native build 00:03:38.322 Program cat found: YES (/usr/bin/cat) 00:03:38.322 Project name: DPDK 00:03:38.322 Project version: 23.11.0 00:03:38.322 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:38.322 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:38.322 Host machine cpu family: x86_64 00:03:38.322 Host machine cpu: x86_64 00:03:38.322 Message: ## Building in Developer Mode ## 00:03:38.322 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:38.322 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:03:38.322 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:03:38.322 Program python3 found: YES (/usr/bin/python3) 00:03:38.322 Program cat found: YES (/usr/bin/cat) 00:03:38.322 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
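The `-Dc_args` value passed to Meson above was assembled earlier in the trace (`autobuild_common.sh@85-94`): a base flag set, then `-Werror` when the compiler is gcc >= 5, then `-Wno-stringop-overflow` when gcc >= 10. A sketch of that gating, with `cc_family`/`cc_major` as stand-in variable names (assumptions, not the script's actual identifiers):

```shell
#!/usr/bin/env bash
# Sketch of the dpdk_cflags assembly traced above; variable names assumed.
cc_family=gcc
cc_major=13                       # the trace shows gcc 13 on this node
dpdk_cflags='-fPIC -g -fcommon'

# Enable -Werror only for gcc >= 5, matching [[ gcc == *gcc* ]] && [[ 13 -ge 5 ]]
if [[ $cc_family == *gcc* ]] && [[ $cc_major -ge 5 ]]; then
    dpdk_cflags+=' -Werror'
fi

# gcc >= 10 additionally gets -Wno-stringop-overflow (per the trace;
# the rationale, suppressing spurious warnings, is an assumption).
if [[ $cc_family == *gcc* ]] && [[ $cc_major -ge 10 ]]; then
    dpdk_cflags+=' -Wno-stringop-overflow'
fi

echo "$dpdk_cflags"
# prints: -fPIC -g -fcommon -Werror -Wno-stringop-overflow
```

The result matches the `c_args` echoed back in the Meson "User defined options" summary later in this log.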
00:03:38.322 Compiler for C supports arguments -march=native: YES 00:03:38.322 Checking for size of "void *" : 8 00:03:38.322 Checking for size of "void *" : 8 (cached) 00:03:38.322 Library m found: YES 00:03:38.322 Library numa found: YES 00:03:38.322 Has header "numaif.h" : YES 00:03:38.322 Library fdt found: NO 00:03:38.322 Library execinfo found: NO 00:03:38.322 Has header "execinfo.h" : YES 00:03:38.322 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:38.323 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:38.323 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:38.323 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:38.323 Run-time dependency openssl found: YES 3.1.1 00:03:38.323 Run-time dependency libpcap found: YES 1.10.4 00:03:38.323 Has header "pcap.h" with dependency libpcap: YES 00:03:38.323 Compiler for C supports arguments -Wcast-qual: YES 00:03:38.323 Compiler for C supports arguments -Wdeprecated: YES 00:03:38.323 Compiler for C supports arguments -Wformat: YES 00:03:38.323 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:38.323 Compiler for C supports arguments -Wformat-security: NO 00:03:38.323 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:38.323 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:38.323 Compiler for C supports arguments -Wnested-externs: YES 00:03:38.323 Compiler for C supports arguments -Wold-style-definition: YES 00:03:38.323 Compiler for C supports arguments -Wpointer-arith: YES 00:03:38.323 Compiler for C supports arguments -Wsign-compare: YES 00:03:38.323 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:38.323 Compiler for C supports arguments -Wundef: YES 00:03:38.323 Compiler for C supports arguments -Wwrite-strings: YES 00:03:38.323 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:38.323 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:38.323 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:03:38.323 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:38.323 Program objdump found: YES (/usr/bin/objdump) 00:03:38.323 Compiler for C supports arguments -mavx512f: YES 00:03:38.323 Checking if "AVX512 checking" compiles: YES 00:03:38.323 Fetching value of define "__SSE4_2__" : 1 00:03:38.323 Fetching value of define "__AES__" : 1 00:03:38.323 Fetching value of define "__AVX__" : 1 00:03:38.323 Fetching value of define "__AVX2__" : (undefined) 00:03:38.323 Fetching value of define "__AVX512BW__" : (undefined) 00:03:38.323 Fetching value of define "__AVX512CD__" : (undefined) 00:03:38.323 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:38.323 Fetching value of define "__AVX512F__" : (undefined) 00:03:38.323 Fetching value of define "__AVX512VL__" : (undefined) 00:03:38.323 Fetching value of define "__PCLMUL__" : 1 00:03:38.323 Fetching value of define "__RDRND__" : 1 00:03:38.323 Fetching value of define "__RDSEED__" : (undefined) 00:03:38.323 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:38.323 Fetching value of define "__znver1__" : (undefined) 00:03:38.323 Fetching value of define "__znver2__" : (undefined) 00:03:38.323 Fetching value of define "__znver3__" : (undefined) 00:03:38.323 Fetching value of define "__znver4__" : (undefined) 00:03:38.323 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:38.323 Message: lib/log: Defining dependency "log" 00:03:38.323 Message: lib/kvargs: Defining dependency "kvargs" 00:03:38.323 Message: lib/telemetry: Defining dependency "telemetry" 00:03:38.323 Checking for function "getentropy" : NO 00:03:38.323 Message: lib/eal: Defining dependency "eal" 00:03:38.323 Message: lib/ring: Defining dependency "ring" 00:03:38.323 Message: lib/rcu: Defining dependency "rcu" 00:03:38.323 Message: lib/mempool: Defining dependency "mempool" 00:03:38.323 Message: lib/mbuf: Defining dependency "mbuf" 00:03:38.323 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:38.323 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:38.323 Compiler for C supports arguments -mpclmul: YES 00:03:38.323 Compiler for C supports arguments -maes: YES 00:03:38.323 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:38.323 Compiler for C supports arguments -mavx512bw: YES 00:03:38.323 Compiler for C supports arguments -mavx512dq: YES 00:03:38.323 Compiler for C supports arguments -mavx512vl: YES 00:03:38.323 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:38.323 Compiler for C supports arguments -mavx2: YES 00:03:38.323 Compiler for C supports arguments -mavx: YES 00:03:38.323 Message: lib/net: Defining dependency "net" 00:03:38.323 Message: lib/meter: Defining dependency "meter" 00:03:38.323 Message: lib/ethdev: Defining dependency "ethdev" 00:03:38.323 Message: lib/pci: Defining dependency "pci" 00:03:38.323 Message: lib/cmdline: Defining dependency "cmdline" 00:03:38.323 Message: lib/metrics: Defining dependency "metrics" 00:03:38.323 Message: lib/hash: Defining dependency "hash" 00:03:38.323 Message: lib/timer: Defining dependency "timer" 00:03:38.323 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:38.323 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:38.323 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:38.323 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:38.323 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:38.323 Message: lib/acl: Defining dependency "acl" 00:03:38.323 Message: lib/bbdev: Defining dependency "bbdev" 00:03:38.323 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:38.323 Run-time dependency libelf found: YES 0.191 00:03:38.323 Message: lib/bpf: Defining dependency "bpf" 00:03:38.323 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:38.323 Message: lib/compressdev: Defining 
dependency "compressdev" 00:03:38.323 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:38.323 Message: lib/distributor: Defining dependency "distributor" 00:03:38.323 Message: lib/dmadev: Defining dependency "dmadev" 00:03:38.323 Message: lib/efd: Defining dependency "efd" 00:03:38.323 Message: lib/eventdev: Defining dependency "eventdev" 00:03:38.323 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:38.323 Message: lib/gpudev: Defining dependency "gpudev" 00:03:38.323 Message: lib/gro: Defining dependency "gro" 00:03:38.323 Message: lib/gso: Defining dependency "gso" 00:03:38.323 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:38.323 Message: lib/jobstats: Defining dependency "jobstats" 00:03:38.323 Message: lib/latencystats: Defining dependency "latencystats" 00:03:38.323 Message: lib/lpm: Defining dependency "lpm" 00:03:38.323 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:38.323 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:38.323 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:38.323 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:38.323 Message: lib/member: Defining dependency "member" 00:03:38.323 Message: lib/pcapng: Defining dependency "pcapng" 00:03:38.324 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:38.324 Message: lib/power: Defining dependency "power" 00:03:38.324 Message: lib/rawdev: Defining dependency "rawdev" 00:03:38.324 Message: lib/regexdev: Defining dependency "regexdev" 00:03:38.324 Message: lib/mldev: Defining dependency "mldev" 00:03:38.324 Message: lib/rib: Defining dependency "rib" 00:03:38.324 Message: lib/reorder: Defining dependency "reorder" 00:03:38.324 Message: lib/sched: Defining dependency "sched" 00:03:38.324 Message: lib/security: Defining dependency "security" 00:03:38.324 Message: lib/stack: Defining dependency "stack" 00:03:38.324 Has header "linux/userfaultfd.h" : YES 00:03:38.324 Has 
header "linux/vduse.h" : YES 00:03:38.324 Message: lib/vhost: Defining dependency "vhost" 00:03:38.324 Message: lib/ipsec: Defining dependency "ipsec" 00:03:38.324 Message: lib/pdcp: Defining dependency "pdcp" 00:03:38.324 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:38.324 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:38.324 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:03:38.324 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:38.324 Message: lib/fib: Defining dependency "fib" 00:03:38.324 Message: lib/port: Defining dependency "port" 00:03:38.324 Message: lib/pdump: Defining dependency "pdump" 00:03:38.324 Message: lib/table: Defining dependency "table" 00:03:38.324 Message: lib/pipeline: Defining dependency "pipeline" 00:03:38.324 Message: lib/graph: Defining dependency "graph" 00:03:38.324 Message: lib/node: Defining dependency "node" 00:03:39.713 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:39.713 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:39.713 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:39.713 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:39.713 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:39.713 Compiler for C supports arguments -Wno-unused-value: YES 00:03:39.713 Compiler for C supports arguments -Wno-format: YES 00:03:39.713 Compiler for C supports arguments -Wno-format-security: YES 00:03:39.713 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:39.713 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:39.713 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:39.713 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:39.713 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:39.713 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:39.713 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:03:39.713 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:39.713 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:39.713 Has header "sys/epoll.h" : YES 00:03:39.713 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:39.713 Configuring doxy-api-html.conf using configuration 00:03:39.713 Configuring doxy-api-man.conf using configuration 00:03:39.713 Program mandb found: YES (/usr/bin/mandb) 00:03:39.713 Program sphinx-build found: NO 00:03:39.713 Configuring rte_build_config.h using configuration 00:03:39.713 Message: 00:03:39.713 ================= 00:03:39.713 Applications Enabled 00:03:39.713 ================= 00:03:39.714 00:03:39.714 apps: 00:03:39.714 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:39.714 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:39.714 test-pmd, test-regex, test-sad, test-security-perf, 00:03:39.714 00:03:39.714 Message: 00:03:39.714 ================= 00:03:39.714 Libraries Enabled 00:03:39.714 ================= 00:03:39.714 00:03:39.714 libs: 00:03:39.714 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:39.714 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:39.714 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:39.714 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:39.714 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:39.714 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:39.714 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:39.714 00:03:39.714 00:03:39.714 Message: 00:03:39.714 =============== 00:03:39.714 Drivers Enabled 00:03:39.714 =============== 00:03:39.714 00:03:39.714 common: 00:03:39.714 00:03:39.714 bus: 00:03:39.714 pci, vdev, 00:03:39.714 mempool: 00:03:39.714 ring, 00:03:39.714 dma: 
00:03:39.714 00:03:39.714 net: 00:03:39.714 i40e, 00:03:39.714 raw: 00:03:39.714 00:03:39.714 crypto: 00:03:39.714 00:03:39.714 compress: 00:03:39.714 00:03:39.714 regex: 00:03:39.714 00:03:39.714 ml: 00:03:39.714 00:03:39.714 vdpa: 00:03:39.714 00:03:39.714 event: 00:03:39.714 00:03:39.714 baseband: 00:03:39.714 00:03:39.714 gpu: 00:03:39.714 00:03:39.714 00:03:39.714 Message: 00:03:39.714 ================= 00:03:39.714 Content Skipped 00:03:39.714 ================= 00:03:39.714 00:03:39.714 apps: 00:03:39.714 00:03:39.714 libs: 00:03:39.714 00:03:39.714 drivers: 00:03:39.714 common/cpt: not in enabled drivers build config 00:03:39.714 common/dpaax: not in enabled drivers build config 00:03:39.714 common/iavf: not in enabled drivers build config 00:03:39.714 common/idpf: not in enabled drivers build config 00:03:39.714 common/mvep: not in enabled drivers build config 00:03:39.714 common/octeontx: not in enabled drivers build config 00:03:39.714 bus/auxiliary: not in enabled drivers build config 00:03:39.714 bus/cdx: not in enabled drivers build config 00:03:39.714 bus/dpaa: not in enabled drivers build config 00:03:39.714 bus/fslmc: not in enabled drivers build config 00:03:39.714 bus/ifpga: not in enabled drivers build config 00:03:39.714 bus/platform: not in enabled drivers build config 00:03:39.714 bus/vmbus: not in enabled drivers build config 00:03:39.714 common/cnxk: not in enabled drivers build config 00:03:39.714 common/mlx5: not in enabled drivers build config 00:03:39.714 common/nfp: not in enabled drivers build config 00:03:39.714 common/qat: not in enabled drivers build config 00:03:39.714 common/sfc_efx: not in enabled drivers build config 00:03:39.714 mempool/bucket: not in enabled drivers build config 00:03:39.714 mempool/cnxk: not in enabled drivers build config 00:03:39.714 mempool/dpaa: not in enabled drivers build config 00:03:39.714 mempool/dpaa2: not in enabled drivers build config 00:03:39.714 mempool/octeontx: not in enabled drivers build 
config 00:03:39.714 mempool/stack: not in enabled drivers build config 00:03:39.714 dma/cnxk: not in enabled drivers build config 00:03:39.714 dma/dpaa: not in enabled drivers build config 00:03:39.714 dma/dpaa2: not in enabled drivers build config 00:03:39.714 dma/hisilicon: not in enabled drivers build config 00:03:39.714 dma/idxd: not in enabled drivers build config 00:03:39.714 dma/ioat: not in enabled drivers build config 00:03:39.714 dma/skeleton: not in enabled drivers build config 00:03:39.714 net/af_packet: not in enabled drivers build config 00:03:39.714 net/af_xdp: not in enabled drivers build config 00:03:39.714 net/ark: not in enabled drivers build config 00:03:39.714 net/atlantic: not in enabled drivers build config 00:03:39.714 net/avp: not in enabled drivers build config 00:03:39.714 net/axgbe: not in enabled drivers build config 00:03:39.714 net/bnx2x: not in enabled drivers build config 00:03:39.714 net/bnxt: not in enabled drivers build config 00:03:39.714 net/bonding: not in enabled drivers build config 00:03:39.714 net/cnxk: not in enabled drivers build config 00:03:39.714 net/cpfl: not in enabled drivers build config 00:03:39.714 net/cxgbe: not in enabled drivers build config 00:03:39.714 net/dpaa: not in enabled drivers build config 00:03:39.714 net/dpaa2: not in enabled drivers build config 00:03:39.714 net/e1000: not in enabled drivers build config 00:03:39.714 net/ena: not in enabled drivers build config 00:03:39.714 net/enetc: not in enabled drivers build config 00:03:39.714 net/enetfec: not in enabled drivers build config 00:03:39.714 net/enic: not in enabled drivers build config 00:03:39.714 net/failsafe: not in enabled drivers build config 00:03:39.714 net/fm10k: not in enabled drivers build config 00:03:39.714 net/gve: not in enabled drivers build config 00:03:39.714 net/hinic: not in enabled drivers build config 00:03:39.714 net/hns3: not in enabled drivers build config 00:03:39.714 net/iavf: not in enabled drivers build config 
00:03:39.714 net/ice: not in enabled drivers build config 00:03:39.714 net/idpf: not in enabled drivers build config 00:03:39.714 net/igc: not in enabled drivers build config 00:03:39.714 net/ionic: not in enabled drivers build config 00:03:39.714 net/ipn3ke: not in enabled drivers build config 00:03:39.714 net/ixgbe: not in enabled drivers build config 00:03:39.714 net/mana: not in enabled drivers build config 00:03:39.714 net/memif: not in enabled drivers build config 00:03:39.714 net/mlx4: not in enabled drivers build config 00:03:39.714 net/mlx5: not in enabled drivers build config 00:03:39.714 net/mvneta: not in enabled drivers build config 00:03:39.714 net/mvpp2: not in enabled drivers build config 00:03:39.714 net/netvsc: not in enabled drivers build config 00:03:39.714 net/nfb: not in enabled drivers build config 00:03:39.714 net/nfp: not in enabled drivers build config 00:03:39.714 net/ngbe: not in enabled drivers build config 00:03:39.714 net/null: not in enabled drivers build config 00:03:39.714 net/octeontx: not in enabled drivers build config 00:03:39.714 net/octeon_ep: not in enabled drivers build config 00:03:39.714 net/pcap: not in enabled drivers build config 00:03:39.714 net/pfe: not in enabled drivers build config 00:03:39.714 net/qede: not in enabled drivers build config 00:03:39.714 net/ring: not in enabled drivers build config 00:03:39.714 net/sfc: not in enabled drivers build config 00:03:39.714 net/softnic: not in enabled drivers build config 00:03:39.714 net/tap: not in enabled drivers build config 00:03:39.714 net/thunderx: not in enabled drivers build config 00:03:39.714 net/txgbe: not in enabled drivers build config 00:03:39.714 net/vdev_netvsc: not in enabled drivers build config 00:03:39.714 net/vhost: not in enabled drivers build config 00:03:39.714 net/virtio: not in enabled drivers build config 00:03:39.714 net/vmxnet3: not in enabled drivers build config 00:03:39.714 raw/cnxk_bphy: not in enabled drivers build config 00:03:39.714 
raw/cnxk_gpio: not in enabled drivers build config 00:03:39.714 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:39.714 raw/ifpga: not in enabled drivers build config 00:03:39.714 raw/ntb: not in enabled drivers build config 00:03:39.714 raw/skeleton: not in enabled drivers build config 00:03:39.714 crypto/armv8: not in enabled drivers build config 00:03:39.714 crypto/bcmfs: not in enabled drivers build config 00:03:39.714 crypto/caam_jr: not in enabled drivers build config 00:03:39.714 crypto/ccp: not in enabled drivers build config 00:03:39.714 crypto/cnxk: not in enabled drivers build config 00:03:39.714 crypto/dpaa_sec: not in enabled drivers build config 00:03:39.714 crypto/dpaa2_sec: not in enabled drivers build config 00:03:39.714 crypto/ipsec_mb: not in enabled drivers build config 00:03:39.714 crypto/mlx5: not in enabled drivers build config 00:03:39.714 crypto/mvsam: not in enabled drivers build config 00:03:39.714 crypto/nitrox: not in enabled drivers build config 00:03:39.714 crypto/null: not in enabled drivers build config 00:03:39.714 crypto/octeontx: not in enabled drivers build config 00:03:39.714 crypto/openssl: not in enabled drivers build config 00:03:39.714 crypto/scheduler: not in enabled drivers build config 00:03:39.714 crypto/uadk: not in enabled drivers build config 00:03:39.714 crypto/virtio: not in enabled drivers build config 00:03:39.715 compress/isal: not in enabled drivers build config 00:03:39.715 compress/mlx5: not in enabled drivers build config 00:03:39.715 compress/octeontx: not in enabled drivers build config 00:03:39.715 compress/zlib: not in enabled drivers build config 00:03:39.715 regex/mlx5: not in enabled drivers build config 00:03:39.715 regex/cn9k: not in enabled drivers build config 00:03:39.715 ml/cnxk: not in enabled drivers build config 00:03:39.715 vdpa/ifc: not in enabled drivers build config 00:03:39.715 vdpa/mlx5: not in enabled drivers build config 00:03:39.715 vdpa/nfp: not in enabled drivers build 
config 00:03:39.715 vdpa/sfc: not in enabled drivers build config 00:03:39.715 event/cnxk: not in enabled drivers build config 00:03:39.715 event/dlb2: not in enabled drivers build config 00:03:39.715 event/dpaa: not in enabled drivers build config 00:03:39.715 event/dpaa2: not in enabled drivers build config 00:03:39.715 event/dsw: not in enabled drivers build config 00:03:39.715 event/opdl: not in enabled drivers build config 00:03:39.715 event/skeleton: not in enabled drivers build config 00:03:39.715 event/sw: not in enabled drivers build config 00:03:39.715 event/octeontx: not in enabled drivers build config 00:03:39.715 baseband/acc: not in enabled drivers build config 00:03:39.715 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:39.715 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:39.715 baseband/la12xx: not in enabled drivers build config 00:03:39.715 baseband/null: not in enabled drivers build config 00:03:39.715 baseband/turbo_sw: not in enabled drivers build config 00:03:39.715 gpu/cuda: not in enabled drivers build config 00:03:39.715 00:03:39.715 00:03:39.715 Build targets in project: 220 00:03:39.715 00:03:39.715 DPDK 23.11.0 00:03:39.715 00:03:39.715 User defined options 00:03:39.715 libdir : lib 00:03:39.715 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:39.715 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:39.715 c_link_args : 00:03:39.715 enable_docs : false 00:03:39.715 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:39.715 enable_kmods : false 00:03:39.715 machine : native 00:03:39.715 tests : false 00:03:39.715 00:03:39.715 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:39.715 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:03:39.715 12:18:08 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:03:39.715 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:39.715 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:39.715 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:39.715 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:39.715 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:39.715 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:39.715 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:39.715 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:39.715 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:39.715 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:39.715 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:39.715 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:39.715 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:39.715 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:39.715 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:39.715 [15/710] Linking static target lib/librte_kvargs.a 00:03:39.974 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:39.974 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:39.974 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:39.974 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:39.974 [20/710] Linking static target lib/librte_log.a 00:03:39.974 [21/710] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:40.236 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.820 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:40.820 [24/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:40.820 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:40.820 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:40.820 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:40.820 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:40.820 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:40.820 [30/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:40.820 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:40.820 [32/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:40.820 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:40.820 [34/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:40.820 [35/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:40.820 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:40.820 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:40.820 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:40.820 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:40.820 [40/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:40.821 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:40.821 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:40.821 [43/710] Compiling C 
object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:40.821 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:40.821 [45/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.821 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:41.083 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:41.083 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:41.083 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:41.083 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:41.083 [51/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:41.083 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:41.083 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:41.083 [54/710] Linking target lib/librte_log.so.24.0 00:03:41.083 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:41.083 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:41.083 [57/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:41.083 [58/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:41.083 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:41.083 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:41.348 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:41.348 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:41.348 [63/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:41.348 [64/710] Linking target lib/librte_kvargs.so.24.0 00:03:41.348 [65/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:41.348 [66/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:41.611 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:41.611 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:41.611 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:41.611 [70/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:41.611 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:41.611 [72/710] Linking static target lib/librte_pci.a 00:03:41.611 [73/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:41.611 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:41.611 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:41.870 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:41.870 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:41.870 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:41.870 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:41.870 [80/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:41.870 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:41.870 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:41.870 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:41.870 [84/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.870 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:41.870 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:42.135 [87/710] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:42.135 [88/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:42.135 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:42.135 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:42.135 [91/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:42.135 [92/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:42.135 [93/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:42.135 [94/710] Linking static target lib/librte_ring.a 00:03:42.135 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:42.135 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:42.135 [97/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:42.135 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:42.135 [99/710] Linking static target lib/librte_meter.a 00:03:42.135 [100/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:42.135 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:42.135 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:42.397 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:42.397 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:42.397 [105/710] Linking static target lib/librte_telemetry.a 00:03:42.397 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:42.397 [107/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:42.397 [108/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:42.397 [109/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:42.397 [110/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 
00:03:42.397 [111/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:42.397 [112/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:42.397 [113/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:42.397 [114/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.660 [115/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.660 [116/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:42.660 [117/710] Linking static target lib/librte_eal.a 00:03:42.660 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:42.660 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:42.660 [120/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:42.660 [121/710] Linking static target lib/librte_net.a 00:03:42.660 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:42.660 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:42.920 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:42.920 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:42.920 [126/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:42.920 [127/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:42.920 [128/710] Linking static target lib/librte_mempool.a 00:03:42.920 [129/710] Linking static target lib/librte_cmdline.a 00:03:42.920 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.181 [131/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:43.181 [132/710] Linking static target lib/librte_cfgfile.a 00:03:43.181 [133/710] Linking target lib/librte_telemetry.so.24.0 00:03:43.181 [134/710] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.181 [135/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:43.181 [136/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:43.181 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:43.181 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:43.181 [139/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:43.181 [140/710] Linking static target lib/librte_metrics.a 00:03:43.445 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:43.446 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:43.446 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:43.446 [144/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:43.446 [145/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:43.446 [146/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:43.446 [147/710] Linking static target lib/librte_rcu.a 00:03:43.446 [148/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:43.706 [149/710] Linking static target lib/librte_bitratestats.a 00:03:43.706 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:43.706 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:43.706 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:43.706 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.706 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:43.706 [155/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:43.706 [156/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:43.975 [157/710] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:43.975 [158/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:43.975 [159/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:43.975 [160/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.975 [161/710] Linking static target lib/librte_timer.a 00:03:43.975 [162/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.975 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.975 [164/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.975 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:43.975 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:44.236 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:44.236 [168/710] Linking static target lib/librte_bbdev.a 00:03:44.236 [169/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:44.236 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:44.236 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.498 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:44.498 [173/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:44.498 [174/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:44.498 [175/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:44.498 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.498 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:44.498 [178/710] 
Linking static target lib/librte_compressdev.a 00:03:44.498 [179/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:44.498 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:44.764 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:44.764 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:45.023 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:45.024 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:45.024 [185/710] Linking static target lib/librte_distributor.a 00:03:45.024 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:45.024 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.289 [188/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:45.289 [189/710] Linking static target lib/librte_dmadev.a 00:03:45.289 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:45.289 [191/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:45.289 [192/710] Linking static target lib/librte_bpf.a 00:03:45.289 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.289 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:45.289 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:45.289 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:45.289 [197/710] Linking static target lib/librte_dispatcher.a 00:03:45.551 [198/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:45.551 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:45.551 [200/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:45.551 [201/710] Generating 
lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.551 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:45.551 [203/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:45.551 [204/710] Linking static target lib/librte_gpudev.a 00:03:45.551 [205/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:45.551 [206/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:45.551 [207/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:45.551 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:45.551 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:45.551 [210/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:45.551 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:45.814 [212/710] Linking static target lib/librte_gro.a 00:03:45.814 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:45.814 [214/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.814 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:45.814 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.814 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:45.814 [218/710] Linking static target lib/librte_jobstats.a 00:03:45.814 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:46.075 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:46.075 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.075 [222/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:46.075 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:46.341 [224/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:46.341 [225/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:46.341 [226/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:46.341 [227/710] Linking static target lib/librte_latencystats.a 00:03:46.341 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.341 [229/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:46.341 [230/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:46.341 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:46.341 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:46.341 [233/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:46.608 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:46.608 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:46.608 [236/710] Linking static target lib/librte_ip_frag.a 00:03:46.608 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:46.874 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.874 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:46.874 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:46.874 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:46.874 [242/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.874 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:46.874 [244/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:46.874 [245/710] 
Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:47.142 [246/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:47.142 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:47.143 [248/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.143 [249/710] Linking static target lib/librte_gso.a 00:03:47.143 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:47.143 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:47.406 [252/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:47.406 [253/710] Linking static target lib/librte_regexdev.a 00:03:47.406 [254/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:47.406 [255/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:47.406 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:47.406 [257/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:47.406 [258/710] Linking static target lib/librte_rawdev.a 00:03:47.406 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.406 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:47.406 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:47.668 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:47.668 [263/710] Linking static target lib/librte_mldev.a 00:03:47.668 [264/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:47.668 [265/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:47.668 [266/710] Linking static target lib/librte_efd.a 00:03:47.668 [267/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:47.668 [268/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:47.668 
[269/710] Linking static target lib/librte_pcapng.a 00:03:47.668 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:47.668 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:03:47.668 [272/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:47.668 [273/710] Linking static target lib/librte_lpm.a 00:03:47.932 [274/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:47.932 [275/710] Linking static target lib/librte_stack.a 00:03:47.932 [276/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:47.932 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:47.932 [278/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:47.932 [279/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:47.932 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:48.196 [281/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:48.196 [282/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.196 [283/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.196 [284/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.196 [285/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.196 [286/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:48.196 [287/710] Linking static target lib/librte_hash.a 00:03:48.196 [288/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.459 [289/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:48.459 [290/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:48.459 [291/710] Linking static target lib/librte_reorder.a 00:03:48.459 [292/710] 
Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:48.459 [293/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:48.459 [294/710] Linking static target lib/librte_power.a 00:03:48.459 [295/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:48.459 [296/710] Linking static target lib/acl/libavx512_tmp.a 00:03:48.459 [297/710] Linking static target lib/librte_acl.a 00:03:48.459 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:48.459 [299/710] Linking static target lib/librte_security.a 00:03:48.459 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.723 [301/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:48.723 [302/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:48.723 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:48.723 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:48.723 [305/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:48.723 [306/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:48.723 [307/710] Linking static target lib/librte_rib.a 00:03:48.982 [308/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.982 [309/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:48.982 [310/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:48.982 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:48.982 [312/710] Linking static target lib/librte_mbuf.a 00:03:48.982 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:48.982 [314/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.982 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:48.982 [316/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:49.243 [317/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.243 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.243 [319/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:49.243 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:49.243 [321/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:49.243 [322/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:49.243 [323/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:49.243 [324/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:49.243 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:49.509 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.509 [327/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.509 [328/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:49.773 [329/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:49.773 [330/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.773 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:49.773 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:49.773 [333/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.032 [334/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:50.032 [335/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:50.295 [336/710] Linking static target lib/librte_member.a 00:03:50.295 [337/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:50.295 [338/710] Linking 
static target lib/librte_cryptodev.a 00:03:50.295 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:50.295 [340/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:50.295 [341/710] Linking static target lib/librte_eventdev.a 00:03:50.295 [342/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:50.295 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:50.295 [344/710] Linking static target lib/librte_ethdev.a 00:03:50.295 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:50.556 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:50.556 [347/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:50.556 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:50.556 [349/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:50.556 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:50.556 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:50.556 [352/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:50.556 [353/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:50.556 [354/710] Linking static target lib/librte_sched.a 00:03:50.556 [355/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:50.556 [356/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:50.556 [357/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.556 [358/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:50.822 [359/710] Linking static target lib/librte_fib.a 00:03:50.822 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:50.822 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:50.822 
[362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:51.083 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:51.083 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:51.083 [365/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:51.083 [366/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:51.083 [367/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:51.083 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:51.083 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:51.083 [370/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:51.345 [371/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.345 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:51.345 [373/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.345 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:51.607 [375/710] Linking static target lib/librte_pdump.a 00:03:51.607 [376/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:51.607 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:51.607 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:51.607 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:51.871 [380/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:51.871 [381/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:51.871 [382/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:51.871 [383/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:51.871 [384/710] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:51.871 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:51.871 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:51.871 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:51.871 [388/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.871 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:52.132 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:52.132 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:52.132 [392/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:52.132 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:52.132 [394/710] Linking static target lib/librte_ipsec.a 00:03:52.132 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:52.400 [396/710] Linking static target lib/librte_table.a 00:03:52.400 [397/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.400 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:52.400 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:52.400 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:52.663 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:52.663 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.925 [403/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:52.925 [404/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:53.189 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:53.189 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 
00:03:53.189 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:53.189 [408/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:53.189 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:53.189 [410/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:53.189 [411/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:53.189 [412/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:53.189 [413/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:53.189 [414/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:53.456 [415/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.456 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:53.456 [417/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:53.456 [418/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:53.719 [419/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:53.719 [420/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:53.719 [421/710] Linking static target drivers/librte_bus_vdev.a 00:03:53.719 [422/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.719 [423/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.719 [424/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:53.719 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:53.719 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:53.719 [427/710] Linking static target lib/librte_port.a 00:03:53.979 [428/710] Linking target 
lib/librte_eal.so.24.0 00:03:53.979 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:53.979 [430/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:53.979 [431/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.979 [432/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:53.979 [433/710] Linking static target drivers/librte_bus_pci.a 00:03:53.979 [434/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:54.240 [435/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:54.240 [436/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:54.240 [437/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:54.240 [438/710] Linking target lib/librte_ring.so.24.0 00:03:54.240 [439/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:54.240 [440/710] Linking target lib/librte_pci.so.24.0 00:03:54.240 [441/710] Linking target lib/librte_timer.so.24.0 00:03:54.240 [442/710] Linking target lib/librte_meter.so.24.0 00:03:54.240 [443/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:54.503 [444/710] Linking target lib/librte_acl.so.24.0 00:03:54.503 [445/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:54.503 [446/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:54.503 [447/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:54.504 [448/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:54.504 [449/710] Linking target lib/librte_cfgfile.so.24.0 00:03:54.504 [450/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:54.504 [451/710] Linking target lib/librte_rcu.so.24.0 
00:03:54.504 [452/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:54.504 [453/710] Linking target lib/librte_mempool.so.24.0 00:03:54.504 [454/710] Linking target lib/librte_dmadev.so.24.0 00:03:54.504 [455/710] Linking target lib/librte_jobstats.so.24.0 00:03:54.504 [456/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:54.504 [457/710] Linking target lib/librte_stack.so.24.0 00:03:54.504 [458/710] Linking target lib/librte_rawdev.so.24.0 00:03:54.504 [459/710] Linking static target lib/librte_graph.a 00:03:54.504 [460/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:54.766 [461/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:54.766 [462/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:54.766 [463/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.766 [464/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:54.766 [465/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:54.766 [466/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:54.766 [467/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:54.766 [468/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:55.030 [469/710] Linking target lib/librte_mbuf.so.24.0 00:03:55.030 [470/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.030 [471/710] Linking target lib/librte_rib.so.24.0 00:03:55.030 [472/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:55.030 [473/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:55.030 [474/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:55.030 [475/710] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:55.030 [476/710] Linking static target drivers/librte_mempool_ring.a 00:03:55.030 [477/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:55.030 [478/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:55.030 [479/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:55.030 [480/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:55.030 [481/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:55.030 [482/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:55.030 [483/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:55.030 [484/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:55.030 [485/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:55.300 [486/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:55.300 [487/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:55.300 [488/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:55.300 [489/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:55.300 [490/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:55.300 [491/710] Linking target lib/librte_fib.so.24.0 00:03:55.300 [492/710] Linking target lib/librte_net.so.24.0 00:03:55.300 [493/710] Linking target lib/librte_bbdev.so.24.0 00:03:55.300 [494/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:55.300 [495/710] Linking target lib/librte_compressdev.so.24.0 00:03:55.300 [496/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:55.300 [497/710] Linking target lib/librte_distributor.so.24.0 00:03:55.300 [498/710] Linking target 
lib/librte_gpudev.so.24.0 00:03:55.300 [499/710] Linking target lib/librte_cryptodev.so.24.0 00:03:55.300 [500/710] Linking target lib/librte_regexdev.so.24.0 00:03:55.300 [501/710] Linking target lib/librte_mldev.so.24.0 00:03:55.300 [502/710] Linking target lib/librte_reorder.so.24.0 00:03:55.300 [503/710] Linking target lib/librte_sched.so.24.0 00:03:55.300 [504/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:55.300 [505/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:55.560 [506/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:55.560 [507/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:55.560 [508/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:55.560 [509/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:55.560 [510/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:55.560 [511/710] Linking target lib/librte_cmdline.so.24.0 00:03:55.560 [512/710] Linking target lib/librte_hash.so.24.0 00:03:55.560 [513/710] Linking target lib/librte_security.so.24.0 00:03:55.826 [514/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:55.826 [515/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.826 [516/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:55.826 [517/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:55.826 [518/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:55.826 [519/710] Linking target lib/librte_efd.so.24.0 00:03:55.826 [520/710] Linking target lib/librte_lpm.so.24.0 00:03:56.091 [521/710] Linking target lib/librte_member.so.24.0 00:03:56.091 [522/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:56.091 [523/710] Linking target 
lib/librte_ipsec.so.24.0 00:03:56.091 [524/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:56.091 [525/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:56.091 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:56.092 [527/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:56.360 [528/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:56.360 [529/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:56.360 [530/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:56.360 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:56.360 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:56.621 [533/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:56.621 [534/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:56.621 [535/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:56.621 [536/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:56.881 [537/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:56.881 [538/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:56.881 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:56.881 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:57.147 [541/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:57.147 [542/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:57.147 [543/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:57.411 [544/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:57.411 [545/710] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:57.411 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:57.411 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:57.411 [548/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:57.411 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:57.411 [550/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:57.411 [551/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:57.672 [552/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:57.672 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:57.672 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:57.672 [555/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:57.937 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:57.937 [557/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:57.937 [558/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:57.937 [559/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:58.199 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:58.459 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:58.459 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:58.739 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:58.739 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:58.739 [565/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:03:58.739 [566/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:58.739 [567/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:58.739 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:58.739 [569/710] Linking target lib/librte_ethdev.so.24.0 00:03:59.022 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:59.022 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:59.022 [572/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:59.022 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:59.022 [574/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:59.022 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:59.022 [576/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:59.022 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:59.322 [578/710] Linking target lib/librte_metrics.so.24.0 00:03:59.322 [579/710] Linking target lib/librte_bpf.so.24.0 00:03:59.322 [580/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:59.322 [581/710] Linking target lib/librte_gro.so.24.0 00:03:59.322 [582/710] Linking target lib/librte_eventdev.so.24.0 00:03:59.322 [583/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:59.322 [584/710] Linking target lib/librte_gso.so.24.0 00:03:59.322 [585/710] Linking target lib/librte_ip_frag.so.24.0 00:03:59.322 [586/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:59.322 [587/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:59.322 [588/710] Linking static target lib/librte_pdcp.a 00:03:59.322 [589/710] Linking target 
lib/librte_pcapng.so.24.0 00:03:59.322 [590/710] Linking target lib/librte_power.so.24.0 00:03:59.322 [591/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:59.629 [592/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:59.629 [593/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:59.629 [594/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:59.629 [595/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:59.629 [596/710] Linking target lib/librte_bitratestats.so.24.0 00:03:59.629 [597/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:59.629 [598/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:59.629 [599/710] Linking target lib/librte_dispatcher.so.24.0 00:03:59.629 [600/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:59.629 [601/710] Linking target lib/librte_latencystats.so.24.0 00:03:59.629 [602/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:59.629 [603/710] Linking target lib/librte_pdump.so.24.0 00:03:59.629 [604/710] Linking target lib/librte_graph.so.24.0 00:03:59.629 [605/710] Linking target lib/librte_port.so.24.0 00:03:59.916 [606/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:59.916 [607/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:59.916 [608/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:59.916 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:59.916 [610/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.916 [611/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:59.916 [612/710] 
Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:59.916 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:04:00.202 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:04:00.202 [615/710] Linking target lib/librte_pdcp.so.24.0 00:04:00.202 [616/710] Linking target lib/librte_table.so.24.0 00:04:00.202 [617/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:04:00.202 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:04:00.202 [619/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:04:00.202 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:04:00.468 [621/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:04:00.468 [622/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:04:00.468 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:04:00.468 [624/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:04:00.469 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:04:00.469 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:04:00.728 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:04:00.728 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:04:00.728 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:04:00.989 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:04:01.249 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:04:01.249 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:04:01.249 [633/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:04:01.249 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:04:01.249 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:04:01.249 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:04:01.508 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:04:01.508 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:04:01.508 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:04:01.508 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:04:01.508 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:01.768 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:01.768 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:04:01.768 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:04:01.768 [645/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:04:01.768 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:02.027 [647/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:04:02.027 [648/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:02.027 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:02.027 [650/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:02.286 [651/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:02.286 [652/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:02.545 [653/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:04:02.545 [654/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:04:02.545 [655/710] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:04:02.545 [656/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:04:02.804 [657/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:04:02.804 [658/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:04:02.804 [659/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:02.804 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:04:02.804 [661/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:02.804 [662/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:04:03.062 [663/710] Linking static target drivers/librte_net_i40e.a 00:04:03.062 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:04:03.063 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:03.321 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:04:03.321 [667/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:04:03.580 [668/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.580 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:04:03.580 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:03.839 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:04:04.098 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:04:04.098 [673/710] Linking static target lib/librte_node.a 00:04:04.358 [674/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.617 [675/710] Linking target lib/librte_node.so.24.0 00:04:04.617 [676/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:05.554 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:04:05.812 [678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:04:05.812 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:07.188 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:08.123 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:04:13.386 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:52.085 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:52.085 [684/710] Linking static target lib/librte_vhost.a 00:04:52.085 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:52.085 [686/710] Linking target lib/librte_vhost.so.24.0 00:04:58.647 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:04:58.647 [688/710] Linking static target lib/librte_pipeline.a 00:04:58.904 [689/710] Linking target app/dpdk-dumpcap 00:04:58.904 [690/710] Linking target app/dpdk-test-acl 00:04:58.904 [691/710] Linking target app/dpdk-test-cmdline 00:04:58.904 [692/710] Linking target app/dpdk-proc-info 00:04:58.904 [693/710] Linking target app/dpdk-test-sad 00:04:58.904 [694/710] Linking target app/dpdk-test-gpudev 00:04:58.904 [695/710] Linking target app/dpdk-pdump 00:04:58.904 [696/710] Linking target app/dpdk-test-flow-perf 00:04:58.904 [697/710] Linking target app/dpdk-test-crypto-perf 00:04:58.904 [698/710] Linking target app/dpdk-test-dma-perf 00:04:58.904 [699/710] Linking target app/dpdk-test-pipeline 00:04:58.904 [700/710] Linking target app/dpdk-test-regex 00:04:58.904 [701/710] Linking target app/dpdk-test-fib 00:04:58.904 [702/710] Linking target app/dpdk-test-security-perf 00:04:58.904 [703/710] Linking target app/dpdk-graph 00:04:58.904 [704/710] Linking target 
app/dpdk-test-bbdev 00:04:58.904 [705/710] Linking target app/dpdk-test-eventdev 00:04:58.904 [706/710] Linking target app/dpdk-test-compress-perf 00:04:58.904 [707/710] Linking target app/dpdk-test-mldev 00:04:58.904 [708/710] Linking target app/dpdk-testpmd 00:05:01.008 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:01.266 [710/710] Linking target lib/librte_pipeline.so.24.0 00:05:01.266 12:19:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:05:01.266 12:19:30 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:05:01.266 12:19:30 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:05:01.266 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:05:01.266 [0/1] Installing files. 00:05:01.527 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.527 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:05:01.528 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.528 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:05:01.529 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:01.530 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:05:01.531 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:05:01.794 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:05:01.795 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:05:01.795 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:05:01.796 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:05:01.796 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:01.796 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:05:02.368 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:05:02.368 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:05:02.368 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:05:02.368 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:05:02.368 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.368 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.369 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:05:02.370 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.370 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.371 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:02.372 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:05:02.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:05:02.372 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:05:02.372 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:05:02.372 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:05:02.372 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:05:02.372 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:05:02.372 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:05:02.372 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:05:02.372 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:05:02.372 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:05:02.372 
Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:05:02.372 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:05:02.372 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:05:02.373 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:05:02.373 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:05:02.373 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:05:02.373 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:05:02.373 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:05:02.373 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:05:02.373 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:05:02.373 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:05:02.373 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:05:02.373 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:05:02.373 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 
00:05:02.373 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:05:02.373 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:05:02.373 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:05:02.373 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:05:02.373 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:05:02.373 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:05:02.373 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:05:02.373 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:05:02.373 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:05:02.373 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:05:02.373 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:05:02.373 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:05:02.373 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:05:02.373 Installing symlink pointing to librte_bitratestats.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:05:02.373 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:05:02.373 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:05:02.373 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:05:02.373 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:05:02.373 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:05:02.373 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:05:02.373 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:05:02.373 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:05:02.373 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:05:02.373 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:05:02.373 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:05:02.373 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:05:02.373 Installing symlink pointing to librte_dmadev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:05:02.373 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:05:02.373 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:05:02.373 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:05:02.373 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:05:02.373 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:05:02.373 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:05:02.373 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:05:02.373 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:05:02.373 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:05:02.373 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:05:02.373 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:05:02.373 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:05:02.373 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:05:02.373 Installing 
symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:05:02.373 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:05:02.373 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:05:02.373 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:05:02.373 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:05:02.373 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:05:02.373 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:05:02.373 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:05:02.373 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:05:02.373 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:05:02.373 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:05:02.373 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:05:02.373 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:05:02.373 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:05:02.373 
'./librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:05:02.373 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:05:02.373 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:05:02.373 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:05:02.373 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:05:02.373 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:05:02.373 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:05:02.373 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:05:02.373 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:05:02.373 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:05:02.373 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:05:02.373 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:05:02.373 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:05:02.373 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:05:02.373 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:05:02.373 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:05:02.373 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:05:02.373 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:05:02.373 Installing symlink pointing to librte_rib.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:05:02.373 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:05:02.373 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:05:02.373 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:05:02.373 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:05:02.373 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:05:02.373 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:05:02.373 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:05:02.373 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:05:02.373 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:05:02.373 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:05:02.373 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:05:02.373 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:05:02.373 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:05:02.373 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:05:02.374 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:05:02.374 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:05:02.374 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:05:02.374 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:05:02.374 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:05:02.374 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:05:02.374 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:05:02.374 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:05:02.374 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:05:02.374 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:05:02.374 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:05:02.374 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:05:02.374 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 
00:05:02.374 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:05:02.374 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:05:02.374 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:02.374 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:05:02.374 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:02.374 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:05:02.374 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:02.374 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:05:02.374 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:02.374 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:05:02.374 12:19:31 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:05:02.374 12:19:31 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:02.374 00:05:02.374 real 1m29.423s 00:05:02.374 user 18m8.299s 00:05:02.374 sys 2m8.988s 00:05:02.374 12:19:31 build_native_dpdk -- 
common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:02.374 12:19:31 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:05:02.374 ************************************ 00:05:02.374 END TEST build_native_dpdk 00:05:02.374 ************************************ 00:05:02.633 12:19:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:02.633 12:19:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:02.633 12:19:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:02.633 12:19:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:02.633 12:19:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:02.633 12:19:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:02.633 12:19:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:02.633 12:19:31 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:05:02.633 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:05:02.633 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:05:02.633 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:05:02.633 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:02.891 Using 'verbs' RDMA provider 00:05:13.440 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:05:23.425 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:05:23.425 Creating mk/config.mk...done. 00:05:23.425 Creating mk/cc.flags.mk...done. 00:05:23.425 Type 'make' to build. 
00:05:23.425 12:19:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:05:23.425 12:19:51 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:23.425 12:19:51 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:23.425 12:19:51 -- common/autotest_common.sh@10 -- $ set +x 00:05:23.425 ************************************ 00:05:23.425 START TEST make 00:05:23.425 ************************************ 00:05:23.425 12:19:51 make -- common/autotest_common.sh@1127 -- $ make -j48 00:05:23.425 make[1]: Nothing to be done for 'all'. 00:05:24.813 The Meson build system 00:05:24.813 Version: 1.5.0 00:05:24.813 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:05:24.813 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:24.813 Build type: native build 00:05:24.813 Project name: libvfio-user 00:05:24.813 Project version: 0.0.1 00:05:24.813 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:24.813 C linker for the host machine: gcc ld.bfd 2.40-14 00:05:24.813 Host machine cpu family: x86_64 00:05:24.813 Host machine cpu: x86_64 00:05:24.813 Run-time dependency threads found: YES 00:05:24.813 Library dl found: YES 00:05:24.813 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:24.813 Run-time dependency json-c found: YES 0.17 00:05:24.813 Run-time dependency cmocka found: YES 1.1.7 00:05:24.813 Program pytest-3 found: NO 00:05:24.813 Program flake8 found: NO 00:05:24.813 Program misspell-fixer found: NO 00:05:24.813 Program restructuredtext-lint found: NO 00:05:24.813 Program valgrind found: YES (/usr/bin/valgrind) 00:05:24.813 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:24.813 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:24.813 Compiler for C supports arguments -Wwrite-strings: YES 00:05:24.813 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but 
uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:24.813 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:24.813 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:24.813 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:24.813 Build targets in project: 8
00:05:24.813 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:24.813 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:24.813
00:05:24.813 libvfio-user 0.0.1
00:05:24.813
00:05:24.813 User defined options
00:05:24.813 buildtype : debug
00:05:24.813 default_library: shared
00:05:24.813 libdir : /usr/local/lib
00:05:24.813
00:05:24.813 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:25.767 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:25.767 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:25.767 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:25.767 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:25.767 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:25.767 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:25.767 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:25.767 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:25.767 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:25.767 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:25.767 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:26.030 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:26.030 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:26.030 [13/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:26.030 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:26.030 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:26.030 [16/37] Compiling C object samples/null.p/null.c.o
00:05:26.030 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:26.030 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:26.030 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:26.030 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:26.030 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:26.030 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:26.030 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:26.030 [24/37] Compiling C object samples/server.p/server.c.o
00:05:26.030 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:26.030 [26/37] Compiling C object samples/client.p/client.c.o
00:05:26.030 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:26.030 [28/37] Linking target samples/client
00:05:26.030 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:05:26.030 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:26.290 [31/37] Linking target test/unit_tests
00:05:26.290 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:26.290 [33/37] Linking target samples/server
00:05:26.290 [34/37] Linking target samples/gpio-pci-idio-16
00:05:26.290 [35/37] Linking target samples/null
00:05:26.290 [36/37] Linking target samples/lspci
00:05:26.290 [37/37] Linking target samples/shadow_ioeventfd_server
00:05:26.290 INFO: autodetecting backend as ninja
00:05:26.290 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:26.552 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:27.123 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:27.123 ninja: no work to do.
00:06:05.830 CC lib/log/log.o
00:06:05.830 CC lib/ut_mock/mock.o
00:06:05.830 CC lib/log/log_flags.o
00:06:05.830 CC lib/log/log_deprecated.o
00:06:05.830 CC lib/ut/ut.o
00:06:05.830 LIB libspdk_ut.a
00:06:05.830 LIB libspdk_ut_mock.a
00:06:05.830 LIB libspdk_log.a
00:06:05.830 SO libspdk_ut.so.2.0
00:06:05.830 SO libspdk_ut_mock.so.6.0
00:06:05.830 SO libspdk_log.so.7.1
00:06:05.830 SYMLINK libspdk_ut.so
00:06:05.830 SYMLINK libspdk_ut_mock.so
00:06:05.830 SYMLINK libspdk_log.so
00:06:05.830 CC lib/dma/dma.o
00:06:05.830 CC lib/util/base64.o
00:06:05.830 CC lib/util/bit_array.o
00:06:05.830 CC lib/ioat/ioat.o
00:06:05.830 CXX lib/trace_parser/trace.o
00:06:05.830 CC lib/util/cpuset.o
00:06:05.830 CC lib/util/crc16.o
00:06:05.830 CC lib/util/crc32.o
00:06:05.830 CC lib/util/crc32c.o
00:06:05.830 CC lib/util/crc32_ieee.o
00:06:05.830 CC lib/util/crc64.o
00:06:05.830 CC lib/util/dif.o
00:06:05.830 CC lib/util/fd.o
00:06:05.830 CC lib/util/fd_group.o
00:06:05.830 CC lib/util/file.o
00:06:05.830 CC lib/util/hexlify.o
00:06:05.830 CC lib/util/iov.o
00:06:05.830 CC lib/util/math.o
00:06:05.830 CC lib/util/net.o
00:06:05.830 CC lib/util/pipe.o
00:06:05.830 CC lib/util/strerror_tls.o
00:06:05.830 CC lib/util/string.o
00:06:05.830 CC lib/util/uuid.o
00:06:05.830 CC lib/util/xor.o
00:06:05.830 CC lib/util/zipf.o
00:06:05.830 CC lib/util/md5.o
00:06:05.830 CC lib/vfio_user/host/vfio_user_pci.o
00:06:05.830 CC lib/vfio_user/host/vfio_user.o
00:06:05.830 LIB libspdk_dma.a
00:06:05.830 SO libspdk_dma.so.5.0
00:06:05.830 SYMLINK libspdk_dma.so
00:06:05.830 LIB libspdk_ioat.a
00:06:05.830 SO libspdk_ioat.so.7.0
00:06:05.830 SYMLINK libspdk_ioat.so
00:06:05.830 LIB libspdk_vfio_user.a
00:06:05.830 SO libspdk_vfio_user.so.5.0
00:06:05.830 SYMLINK libspdk_vfio_user.so
00:06:05.830 LIB libspdk_util.a
00:06:05.830 SO libspdk_util.so.10.1
00:06:05.830 SYMLINK libspdk_util.so
00:06:05.830 LIB libspdk_trace_parser.a
00:06:05.830 SO libspdk_trace_parser.so.6.0
00:06:05.830 CC lib/rdma_provider/common.o
00:06:05.830 CC lib/conf/conf.o
00:06:05.830 CC lib/vmd/vmd.o
00:06:05.830 CC lib/rdma_utils/rdma_utils.o
00:06:05.830 CC lib/idxd/idxd.o
00:06:05.830 CC lib/json/json_parse.o
00:06:05.830 CC lib/env_dpdk/env.o
00:06:05.830 CC lib/rdma_provider/rdma_provider_verbs.o
00:06:05.830 CC lib/vmd/led.o
00:06:05.830 CC lib/json/json_util.o
00:06:05.830 CC lib/idxd/idxd_user.o
00:06:05.830 CC lib/env_dpdk/memory.o
00:06:05.830 CC lib/json/json_write.o
00:06:05.830 CC lib/idxd/idxd_kernel.o
00:06:05.830 CC lib/env_dpdk/pci.o
00:06:05.830 CC lib/env_dpdk/init.o
00:06:05.830 CC lib/env_dpdk/threads.o
00:06:05.830 CC lib/env_dpdk/pci_ioat.o
00:06:05.830 CC lib/env_dpdk/pci_virtio.o
00:06:05.830 CC lib/env_dpdk/pci_vmd.o
00:06:05.830 CC lib/env_dpdk/pci_idxd.o
00:06:05.830 CC lib/env_dpdk/pci_event.o
00:06:05.830 CC lib/env_dpdk/sigbus_handler.o
00:06:05.830 CC lib/env_dpdk/pci_dpdk.o
00:06:05.830 CC lib/env_dpdk/pci_dpdk_2207.o
00:06:05.830 CC lib/env_dpdk/pci_dpdk_2211.o
00:06:05.830 SYMLINK libspdk_trace_parser.so
00:06:05.830 LIB libspdk_conf.a
00:06:05.830 SO libspdk_conf.so.6.0
00:06:05.830 LIB libspdk_rdma_provider.a
00:06:05.830 LIB libspdk_rdma_utils.a
00:06:05.830 LIB libspdk_json.a
00:06:05.830 SO libspdk_rdma_provider.so.6.0
00:06:05.830 SYMLINK libspdk_conf.so
00:06:05.830 SO libspdk_rdma_utils.so.1.0
00:06:05.830 SO libspdk_json.so.6.0
00:06:05.830 SYMLINK libspdk_rdma_provider.so
00:06:05.830 SYMLINK libspdk_rdma_utils.so
00:06:05.830 SYMLINK libspdk_json.so
00:06:05.830 CC lib/jsonrpc/jsonrpc_server.o
00:06:05.830 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:06:05.830 CC lib/jsonrpc/jsonrpc_client.o
00:06:05.830 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:06:05.830 LIB libspdk_idxd.a
00:06:05.830 SO libspdk_idxd.so.12.1
00:06:05.830 LIB libspdk_vmd.a
00:06:05.830 SYMLINK libspdk_idxd.so
00:06:05.830 SO libspdk_vmd.so.6.0
00:06:05.830 SYMLINK libspdk_vmd.so
00:06:05.830 LIB libspdk_jsonrpc.a
00:06:05.830 SO libspdk_jsonrpc.so.6.0
00:06:05.830 SYMLINK libspdk_jsonrpc.so
00:06:05.830 CC lib/rpc/rpc.o
00:06:05.830 LIB libspdk_rpc.a
00:06:05.830 SO libspdk_rpc.so.6.0
00:06:05.830 SYMLINK libspdk_rpc.so
00:06:05.830 CC lib/trace/trace.o
00:06:05.830 CC lib/keyring/keyring.o
00:06:05.830 CC lib/trace/trace_flags.o
00:06:05.830 CC lib/keyring/keyring_rpc.o
00:06:05.830 CC lib/notify/notify.o
00:06:05.830 CC lib/trace/trace_rpc.o
00:06:05.831 CC lib/notify/notify_rpc.o
00:06:05.831 LIB libspdk_notify.a
00:06:05.831 SO libspdk_notify.so.6.0
00:06:05.831 SYMLINK libspdk_notify.so
00:06:05.831 LIB libspdk_keyring.a
00:06:05.831 LIB libspdk_trace.a
00:06:05.831 SO libspdk_keyring.so.2.0
00:06:05.831 SO libspdk_trace.so.11.0
00:06:06.088 SYMLINK libspdk_keyring.so
00:06:06.088 SYMLINK libspdk_trace.so
00:06:06.088 LIB libspdk_env_dpdk.a
00:06:06.088 CC lib/thread/thread.o
00:06:06.088 CC lib/thread/iobuf.o
00:06:06.088 CC lib/sock/sock.o
00:06:06.088 CC lib/sock/sock_rpc.o
00:06:06.346 SO libspdk_env_dpdk.so.15.1
00:06:06.346 SYMLINK libspdk_env_dpdk.so
00:06:06.604 LIB libspdk_sock.a
00:06:06.604 SO libspdk_sock.so.10.0
00:06:06.604 SYMLINK libspdk_sock.so
00:06:06.863 CC lib/nvme/nvme_ctrlr_cmd.o
00:06:06.863 CC lib/nvme/nvme_ctrlr.o
00:06:06.863 CC lib/nvme/nvme_fabric.o
00:06:06.863 CC lib/nvme/nvme_ns_cmd.o
00:06:06.863 CC lib/nvme/nvme_ns.o
00:06:06.863 CC lib/nvme/nvme_pcie_common.o
00:06:06.863 CC lib/nvme/nvme_pcie.o
00:06:06.863 CC lib/nvme/nvme_qpair.o
00:06:06.863 CC lib/nvme/nvme.o
00:06:06.863 CC lib/nvme/nvme_quirks.o
00:06:06.863 CC lib/nvme/nvme_transport.o
00:06:06.863 CC lib/nvme/nvme_discovery.o
00:06:06.863 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:06:06.863 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:06:06.863 CC lib/nvme/nvme_tcp.o
00:06:06.863 CC lib/nvme/nvme_opal.o
00:06:06.863 CC lib/nvme/nvme_io_msg.o
00:06:06.863 CC lib/nvme/nvme_poll_group.o
00:06:06.863 CC lib/nvme/nvme_zns.o
00:06:06.863 CC lib/nvme/nvme_stubs.o
00:06:06.863 CC lib/nvme/nvme_auth.o
00:06:06.863 CC lib/nvme/nvme_cuse.o
00:06:06.863 CC lib/nvme/nvme_vfio_user.o
00:06:06.863 CC lib/nvme/nvme_rdma.o
00:06:07.798 LIB libspdk_thread.a
00:06:07.798 SO libspdk_thread.so.11.0
00:06:07.798 SYMLINK libspdk_thread.so
00:06:08.056 CC lib/fsdev/fsdev.o
00:06:08.056 CC lib/accel/accel.o
00:06:08.056 CC lib/init/json_config.o
00:06:08.056 CC lib/vfu_tgt/tgt_endpoint.o
00:06:08.056 CC lib/virtio/virtio.o
00:06:08.056 CC lib/fsdev/fsdev_io.o
00:06:08.056 CC lib/init/subsystem.o
00:06:08.056 CC lib/vfu_tgt/tgt_rpc.o
00:06:08.056 CC lib/blob/blobstore.o
00:06:08.056 CC lib/accel/accel_rpc.o
00:06:08.056 CC lib/accel/accel_sw.o
00:06:08.056 CC lib/virtio/virtio_vhost_user.o
00:06:08.056 CC lib/init/subsystem_rpc.o
00:06:08.056 CC lib/blob/request.o
00:06:08.056 CC lib/fsdev/fsdev_rpc.o
00:06:08.056 CC lib/virtio/virtio_vfio_user.o
00:06:08.056 CC lib/blob/zeroes.o
00:06:08.056 CC lib/init/rpc.o
00:06:08.056 CC lib/blob/blob_bs_dev.o
00:06:08.056 CC lib/virtio/virtio_pci.o
00:06:08.315 LIB libspdk_init.a
00:06:08.315 LIB libspdk_vfu_tgt.a
00:06:08.315 SO libspdk_vfu_tgt.so.3.0
00:06:08.315 SO libspdk_init.so.6.0
00:06:08.573 LIB libspdk_virtio.a
00:06:08.573 SYMLINK libspdk_vfu_tgt.so
00:06:08.573 SYMLINK libspdk_init.so
00:06:08.573 SO libspdk_virtio.so.7.0
00:06:08.573 SYMLINK libspdk_virtio.so
00:06:08.573 CC lib/event/app.o
00:06:08.573 CC lib/event/reactor.o
00:06:08.573 CC lib/event/log_rpc.o
00:06:08.573 CC lib/event/app_rpc.o
00:06:08.573 CC lib/event/scheduler_static.o
00:06:08.831 LIB libspdk_fsdev.a
00:06:08.831 SO libspdk_fsdev.so.2.0
00:06:08.831 SYMLINK libspdk_fsdev.so
00:06:08.831 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:06:09.089 LIB libspdk_event.a
00:06:09.089 SO libspdk_event.so.14.0
00:06:09.089 SYMLINK libspdk_event.so
00:06:09.347 LIB libspdk_accel.a
00:06:09.347 SO libspdk_accel.so.16.0
00:06:09.347 SYMLINK libspdk_accel.so
00:06:09.347 LIB libspdk_nvme.a
00:06:09.347 SO libspdk_nvme.so.15.0
00:06:09.347 CC lib/bdev/bdev.o
00:06:09.347 CC lib/bdev/bdev_rpc.o
00:06:09.347 CC lib/bdev/bdev_zone.o
00:06:09.347 CC lib/bdev/part.o
00:06:09.347 CC lib/bdev/scsi_nvme.o
00:06:09.605 LIB libspdk_fuse_dispatcher.a
00:06:09.605 SO libspdk_fuse_dispatcher.so.1.0
00:06:09.605 SYMLINK libspdk_nvme.so
00:06:09.605 SYMLINK libspdk_fuse_dispatcher.so
00:06:11.505 LIB libspdk_blob.a
00:06:11.505 SO libspdk_blob.so.11.0
00:06:11.505 SYMLINK libspdk_blob.so
00:06:11.505 CC lib/blobfs/blobfs.o
00:06:11.505 CC lib/blobfs/tree.o
00:06:11.505 CC lib/lvol/lvol.o
00:06:12.071 LIB libspdk_bdev.a
00:06:12.071 SO libspdk_bdev.so.17.0
00:06:12.071 SYMLINK libspdk_bdev.so
00:06:12.337 LIB libspdk_blobfs.a
00:06:12.337 SO libspdk_blobfs.so.10.0
00:06:12.337 CC lib/ublk/ublk.o
00:06:12.337 CC lib/nbd/nbd.o
00:06:12.337 CC lib/scsi/dev.o
00:06:12.337 CC lib/ublk/ublk_rpc.o
00:06:12.337 CC lib/nbd/nbd_rpc.o
00:06:12.337 CC lib/scsi/lun.o
00:06:12.337 CC lib/nvmf/ctrlr.o
00:06:12.337 CC lib/ftl/ftl_core.o
00:06:12.337 CC lib/scsi/port.o
00:06:12.337 CC lib/nvmf/ctrlr_discovery.o
00:06:12.337 CC lib/ftl/ftl_init.o
00:06:12.337 CC lib/scsi/scsi.o
00:06:12.337 CC lib/scsi/scsi_bdev.o
00:06:12.337 CC lib/ftl/ftl_layout.o
00:06:12.337 CC lib/nvmf/ctrlr_bdev.o
00:06:12.337 CC lib/scsi/scsi_pr.o
00:06:12.337 CC lib/ftl/ftl_debug.o
00:06:12.337 CC lib/scsi/scsi_rpc.o
00:06:12.337 CC lib/nvmf/subsystem.o
00:06:12.337 CC lib/nvmf/nvmf.o
00:06:12.337 CC lib/ftl/ftl_io.o
00:06:12.337 CC lib/scsi/task.o
00:06:12.337 CC lib/ftl/ftl_sb.o
00:06:12.337 CC lib/ftl/ftl_l2p.o
00:06:12.337 CC lib/nvmf/nvmf_rpc.o
00:06:12.337 CC lib/nvmf/transport.o
00:06:12.337 CC lib/ftl/ftl_l2p_flat.o
00:06:12.337 CC lib/nvmf/tcp.o
00:06:12.337 CC lib/ftl/ftl_nv_cache.o
00:06:12.337 CC lib/nvmf/stubs.o
00:06:12.337 CC lib/ftl/ftl_band.o
00:06:12.337 CC lib/nvmf/mdns_server.o
00:06:12.337 CC lib/ftl/ftl_band_ops.o
00:06:12.337 CC lib/nvmf/vfio_user.o
00:06:12.337 CC lib/ftl/ftl_writer.o
00:06:12.337 CC lib/nvmf/rdma.o
00:06:12.337 CC lib/ftl/ftl_rq.o
00:06:12.337 CC lib/nvmf/auth.o
00:06:12.337 CC lib/ftl/ftl_reloc.o
00:06:12.337 CC lib/ftl/ftl_l2p_cache.o
00:06:12.337 CC lib/ftl/ftl_p2l.o
00:06:12.337 CC lib/ftl/ftl_p2l_log.o
00:06:12.337 CC lib/ftl/mngt/ftl_mngt.o
00:06:12.337 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:06:12.337 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:06:12.337 CC lib/ftl/mngt/ftl_mngt_startup.o
00:06:12.337 SYMLINK libspdk_blobfs.so
00:06:12.337 CC lib/ftl/mngt/ftl_mngt_md.o
00:06:12.337 LIB libspdk_lvol.a
00:06:12.337 SO libspdk_lvol.so.10.0
00:06:12.597 SYMLINK libspdk_lvol.so
00:06:12.597 CC lib/ftl/mngt/ftl_mngt_misc.o
00:06:12.858 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:06:12.858 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:06:12.858 CC lib/ftl/mngt/ftl_mngt_band.o
00:06:12.858 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:06:12.858 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:06:12.858 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:06:12.858 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:06:12.858 CC lib/ftl/utils/ftl_conf.o
00:06:12.858 CC lib/ftl/utils/ftl_md.o
00:06:12.858 CC lib/ftl/utils/ftl_mempool.o
00:06:12.858 CC lib/ftl/utils/ftl_bitmap.o
00:06:12.858 CC lib/ftl/utils/ftl_property.o
00:06:12.858 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:06:12.858 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:06:12.858 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:06:12.858 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:06:12.858 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:06:12.858 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:06:13.117 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:06:13.117 CC lib/ftl/upgrade/ftl_sb_v3.o
00:06:13.117 CC lib/ftl/upgrade/ftl_sb_v5.o
00:06:13.117 CC lib/ftl/nvc/ftl_nvc_dev.o
00:06:13.117 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:06:13.117 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:06:13.117 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:06:13.117 CC lib/ftl/base/ftl_base_dev.o
00:06:13.117 CC lib/ftl/base/ftl_base_bdev.o
00:06:13.117 CC lib/ftl/ftl_trace.o
00:06:13.117 LIB libspdk_nbd.a
00:06:13.117 SO libspdk_nbd.so.7.0
00:06:13.376 SYMLINK libspdk_nbd.so
00:06:13.376 LIB libspdk_scsi.a
00:06:13.376 SO libspdk_scsi.so.9.0
00:06:13.376 SYMLINK libspdk_scsi.so
00:06:13.634 LIB libspdk_ublk.a
00:06:13.634 SO libspdk_ublk.so.3.0
00:06:13.634 CC lib/vhost/vhost.o
00:06:13.634 CC lib/iscsi/conn.o
00:06:13.634 CC lib/vhost/vhost_rpc.o
00:06:13.634 CC lib/iscsi/init_grp.o
00:06:13.634 CC lib/vhost/vhost_scsi.o
00:06:13.634 CC lib/iscsi/param.o
00:06:13.634 CC lib/iscsi/iscsi.o
00:06:13.634 CC lib/vhost/vhost_blk.o
00:06:13.634 CC lib/vhost/rte_vhost_user.o
00:06:13.634 CC lib/iscsi/portal_grp.o
00:06:13.634 CC lib/iscsi/tgt_node.o
00:06:13.634 CC lib/iscsi/iscsi_subsystem.o
00:06:13.634 CC lib/iscsi/iscsi_rpc.o
00:06:13.634 CC lib/iscsi/task.o
00:06:13.634 SYMLINK libspdk_ublk.so
00:06:13.893 LIB libspdk_ftl.a
00:06:14.151 SO libspdk_ftl.so.9.0
00:06:14.410 SYMLINK libspdk_ftl.so
00:06:14.977 LIB libspdk_vhost.a
00:06:14.977 SO libspdk_vhost.so.8.0
00:06:14.977 SYMLINK libspdk_vhost.so
00:06:14.977 LIB libspdk_nvmf.a
00:06:14.977 LIB libspdk_iscsi.a
00:06:14.977 SO libspdk_nvmf.so.20.0
00:06:15.235 SO libspdk_iscsi.so.8.0
00:06:15.235 SYMLINK libspdk_iscsi.so
00:06:15.235 SYMLINK libspdk_nvmf.so
00:06:15.493 CC module/env_dpdk/env_dpdk_rpc.o
00:06:15.493 CC module/vfu_device/vfu_virtio.o
00:06:15.493 CC module/vfu_device/vfu_virtio_blk.o
00:06:15.493 CC module/vfu_device/vfu_virtio_scsi.o
00:06:15.493 CC module/vfu_device/vfu_virtio_rpc.o
00:06:15.751 CC module/vfu_device/vfu_virtio_fs.o
00:06:15.751 CC module/keyring/file/keyring.o
00:06:15.751 CC module/keyring/file/keyring_rpc.o
00:06:15.751 CC module/accel/error/accel_error.o
00:06:15.751 CC module/keyring/linux/keyring.o
00:06:15.751 CC module/keyring/linux/keyring_rpc.o
00:06:15.751 CC module/accel/error/accel_error_rpc.o
00:06:15.751 CC module/accel/ioat/accel_ioat.o
00:06:15.751 CC module/accel/dsa/accel_dsa.o
00:06:15.751 CC module/accel/ioat/accel_ioat_rpc.o
00:06:15.751 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:06:15.751 CC module/scheduler/dynamic/scheduler_dynamic.o
00:06:15.751 CC module/accel/dsa/accel_dsa_rpc.o
00:06:15.751 CC module/accel/iaa/accel_iaa.o
00:06:15.751 CC module/scheduler/gscheduler/gscheduler.o
00:06:15.751 CC module/accel/iaa/accel_iaa_rpc.o
00:06:15.751 CC module/sock/posix/posix.o
00:06:15.751 CC module/blob/bdev/blob_bdev.o
00:06:15.751 CC module/fsdev/aio/fsdev_aio.o
00:06:15.751 CC module/fsdev/aio/fsdev_aio_rpc.o
00:06:15.751 CC module/fsdev/aio/linux_aio_mgr.o
00:06:15.751 LIB libspdk_env_dpdk_rpc.a
00:06:15.751 SO libspdk_env_dpdk_rpc.so.6.0
00:06:15.751 SYMLINK libspdk_env_dpdk_rpc.so
00:06:15.751 LIB libspdk_scheduler_gscheduler.a
00:06:15.751 SO libspdk_scheduler_gscheduler.so.4.0
00:06:15.751 LIB libspdk_accel_ioat.a
00:06:16.010 LIB libspdk_accel_iaa.a
00:06:16.010 SO libspdk_accel_ioat.so.6.0
00:06:16.010 LIB libspdk_keyring_linux.a
00:06:16.010 SYMLINK libspdk_scheduler_gscheduler.so
00:06:16.010 LIB libspdk_keyring_file.a
00:06:16.010 LIB libspdk_scheduler_dynamic.a
00:06:16.010 SO libspdk_accel_iaa.so.3.0
00:06:16.010 LIB libspdk_scheduler_dpdk_governor.a
00:06:16.010 SO libspdk_keyring_linux.so.1.0
00:06:16.010 SO libspdk_keyring_file.so.2.0
00:06:16.010 SO libspdk_scheduler_dynamic.so.4.0
00:06:16.010 SO libspdk_scheduler_dpdk_governor.so.4.0
00:06:16.010 LIB libspdk_accel_error.a
00:06:16.010 SYMLINK libspdk_accel_ioat.so
00:06:16.010 LIB libspdk_blob_bdev.a
00:06:16.010 SO libspdk_accel_error.so.2.0
00:06:16.010 LIB libspdk_accel_dsa.a
00:06:16.010 SYMLINK libspdk_accel_iaa.so
00:06:16.010 SYMLINK libspdk_keyring_linux.so
00:06:16.010 SYMLINK libspdk_keyring_file.so
00:06:16.010 SYMLINK libspdk_scheduler_dynamic.so
00:06:16.010 SO libspdk_blob_bdev.so.11.0
00:06:16.010 SYMLINK libspdk_scheduler_dpdk_governor.so
00:06:16.010 SO libspdk_accel_dsa.so.5.0
00:06:16.010 SYMLINK libspdk_accel_error.so
00:06:16.010 SYMLINK libspdk_blob_bdev.so
00:06:16.010 SYMLINK libspdk_accel_dsa.so
00:06:16.272 LIB libspdk_vfu_device.a
00:06:16.272 SO libspdk_vfu_device.so.3.0
00:06:16.272 CC module/bdev/gpt/gpt.o
00:06:16.272 CC module/bdev/lvol/vbdev_lvol.o
00:06:16.272 CC module/bdev/gpt/vbdev_gpt.o
00:06:16.272 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:06:16.272 CC module/bdev/error/vbdev_error.o
00:06:16.272 CC module/bdev/split/vbdev_split.o
00:06:16.272 CC module/bdev/null/bdev_null.o
00:06:16.272 CC module/bdev/delay/vbdev_delay.o
00:06:16.272 CC module/bdev/nvme/bdev_nvme.o
00:06:16.272 CC module/bdev/split/vbdev_split_rpc.o
00:06:16.272 CC module/bdev/null/bdev_null_rpc.o
00:06:16.272 CC module/bdev/delay/vbdev_delay_rpc.o
00:06:16.272 CC module/bdev/error/vbdev_error_rpc.o
00:06:16.272 CC module/bdev/zone_block/vbdev_zone_block.o
00:06:16.272 CC module/bdev/nvme/bdev_nvme_rpc.o
00:06:16.272 CC module/blobfs/bdev/blobfs_bdev.o
00:06:16.272 CC module/bdev/passthru/vbdev_passthru.o
00:06:16.272 CC module/bdev/nvme/nvme_rpc.o
00:06:16.272 CC module/bdev/raid/bdev_raid.o
00:06:16.272 CC module/bdev/aio/bdev_aio.o
00:06:16.272 CC module/bdev/ftl/bdev_ftl.o
00:06:16.272 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:06:16.272 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:06:16.272 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:06:16.272 CC module/bdev/ftl/bdev_ftl_rpc.o
00:06:16.272 CC module/bdev/nvme/bdev_mdns_client.o
00:06:16.272 CC module/bdev/aio/bdev_aio_rpc.o
00:06:16.272 CC module/bdev/raid/bdev_raid_rpc.o
00:06:16.272 CC module/bdev/nvme/vbdev_opal.o
00:06:16.272 CC module/bdev/malloc/bdev_malloc.o
00:06:16.272 CC module/bdev/raid/bdev_raid_sb.o
00:06:16.272 CC module/bdev/nvme/vbdev_opal_rpc.o
00:06:16.272 CC module/bdev/raid/raid0.o
00:06:16.272 CC module/bdev/malloc/bdev_malloc_rpc.o
00:06:16.272 CC module/bdev/iscsi/bdev_iscsi.o
00:06:16.272 CC module/bdev/raid/concat.o
00:06:16.272 CC module/bdev/raid/raid1.o
00:06:16.272 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:06:16.272 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:06:16.272 CC module/bdev/virtio/bdev_virtio_scsi.o
00:06:16.272 CC module/bdev/virtio/bdev_virtio_blk.o
00:06:16.272 CC module/bdev/virtio/bdev_virtio_rpc.o
00:06:16.572 SYMLINK libspdk_vfu_device.so
00:06:16.572 LIB libspdk_sock_posix.a
00:06:16.572 SO libspdk_sock_posix.so.6.0
00:06:16.572 LIB libspdk_fsdev_aio.a
00:06:16.572 SO libspdk_fsdev_aio.so.1.0
00:06:16.572 SYMLINK libspdk_fsdev_aio.so
00:06:16.572 SYMLINK libspdk_sock_posix.so
00:06:16.927 LIB libspdk_blobfs_bdev.a
00:06:16.927 SO libspdk_blobfs_bdev.so.6.0
00:06:16.927 LIB libspdk_bdev_split.a
00:06:16.927 SYMLINK libspdk_blobfs_bdev.so
00:06:16.927 LIB libspdk_bdev_gpt.a
00:06:16.927 LIB libspdk_bdev_ftl.a
00:06:16.927 SO libspdk_bdev_split.so.6.0
00:06:16.927 SO libspdk_bdev_gpt.so.6.0
00:06:16.927 SO libspdk_bdev_ftl.so.6.0
00:06:16.927 LIB libspdk_bdev_null.a
00:06:16.927 LIB libspdk_bdev_passthru.a
00:06:16.927 LIB libspdk_bdev_error.a
00:06:16.927 SO libspdk_bdev_null.so.6.0
00:06:16.927 SO libspdk_bdev_passthru.so.6.0
00:06:16.927 SO libspdk_bdev_error.so.6.0
00:06:16.927 SYMLINK libspdk_bdev_split.so
00:06:16.927 SYMLINK libspdk_bdev_gpt.so
00:06:16.927 SYMLINK libspdk_bdev_ftl.so
00:06:16.927 SYMLINK libspdk_bdev_null.so
00:06:16.927 SYMLINK libspdk_bdev_passthru.so
00:06:16.927 SYMLINK libspdk_bdev_error.so
00:06:16.927 LIB libspdk_bdev_aio.a
00:06:16.927 LIB libspdk_bdev_zone_block.a
00:06:16.927 LIB libspdk_bdev_delay.a
00:06:16.927 SO libspdk_bdev_aio.so.6.0
00:06:16.927 LIB libspdk_bdev_malloc.a
00:06:16.927 LIB libspdk_bdev_iscsi.a
00:06:16.927 SO libspdk_bdev_zone_block.so.6.0
00:06:16.927 SO libspdk_bdev_delay.so.6.0
00:06:16.927 SO libspdk_bdev_malloc.so.6.0
00:06:16.927 SO
libspdk_bdev_iscsi.so.6.0
00:06:16.927 LIB libspdk_bdev_lvol.a
00:06:16.927 LIB libspdk_bdev_virtio.a
00:06:16.927 SYMLINK libspdk_bdev_aio.so
00:06:16.927 SO libspdk_bdev_lvol.so.6.0
00:06:16.927 SYMLINK libspdk_bdev_zone_block.so
00:06:17.224 SYMLINK libspdk_bdev_delay.so
00:06:17.224 SYMLINK libspdk_bdev_malloc.so
00:06:17.224 SO libspdk_bdev_virtio.so.6.0
00:06:17.224 SYMLINK libspdk_bdev_iscsi.so
00:06:17.224 SYMLINK libspdk_bdev_lvol.so
00:06:17.224 SYMLINK libspdk_bdev_virtio.so
00:06:17.483 LIB libspdk_bdev_raid.a
00:06:17.741 SO libspdk_bdev_raid.so.6.0
00:06:17.741 SYMLINK libspdk_bdev_raid.so
00:06:19.116 LIB libspdk_bdev_nvme.a
00:06:19.116 SO libspdk_bdev_nvme.so.7.1
00:06:19.116 SYMLINK libspdk_bdev_nvme.so
00:06:19.684 CC module/event/subsystems/keyring/keyring.o
00:06:19.684 CC module/event/subsystems/sock/sock.o
00:06:19.684 CC module/event/subsystems/vmd/vmd.o
00:06:19.684 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:06:19.684 CC module/event/subsystems/iobuf/iobuf.o
00:06:19.684 CC module/event/subsystems/fsdev/fsdev.o
00:06:19.684 CC module/event/subsystems/vmd/vmd_rpc.o
00:06:19.684 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:06:19.684 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:06:19.684 CC module/event/subsystems/scheduler/scheduler.o
00:06:19.684 LIB libspdk_event_keyring.a
00:06:19.684 LIB libspdk_event_fsdev.a
00:06:19.684 LIB libspdk_event_vhost_blk.a
00:06:19.684 LIB libspdk_event_scheduler.a
00:06:19.684 LIB libspdk_event_sock.a
00:06:19.684 LIB libspdk_event_vfu_tgt.a
00:06:19.684 LIB libspdk_event_vmd.a
00:06:19.684 SO libspdk_event_keyring.so.1.0
00:06:19.684 LIB libspdk_event_iobuf.a
00:06:19.684 SO libspdk_event_fsdev.so.1.0
00:06:19.684 SO libspdk_event_vhost_blk.so.3.0
00:06:19.684 SO libspdk_event_scheduler.so.4.0
00:06:19.684 SO libspdk_event_sock.so.5.0
00:06:19.684 SO libspdk_event_vfu_tgt.so.3.0
00:06:19.684 SO libspdk_event_vmd.so.6.0
00:06:19.684 SO libspdk_event_iobuf.so.3.0
00:06:19.684 SYMLINK libspdk_event_keyring.so
00:06:19.684 SYMLINK libspdk_event_fsdev.so
00:06:19.684 SYMLINK libspdk_event_vhost_blk.so
00:06:19.684 SYMLINK libspdk_event_sock.so
00:06:19.684 SYMLINK libspdk_event_scheduler.so
00:06:19.684 SYMLINK libspdk_event_vfu_tgt.so
00:06:19.684 SYMLINK libspdk_event_vmd.so
00:06:19.684 SYMLINK libspdk_event_iobuf.so
00:06:19.942 CC module/event/subsystems/accel/accel.o
00:06:20.201 LIB libspdk_event_accel.a
00:06:20.201 SO libspdk_event_accel.so.6.0
00:06:20.201 SYMLINK libspdk_event_accel.so
00:06:20.460 CC module/event/subsystems/bdev/bdev.o
00:06:20.460 LIB libspdk_event_bdev.a
00:06:20.460 SO libspdk_event_bdev.so.6.0
00:06:20.718 SYMLINK libspdk_event_bdev.so
00:06:20.718 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:06:20.718 CC module/event/subsystems/nbd/nbd.o
00:06:20.718 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:06:20.718 CC module/event/subsystems/scsi/scsi.o
00:06:20.718 CC module/event/subsystems/ublk/ublk.o
00:06:20.975 LIB libspdk_event_nbd.a
00:06:20.975 LIB libspdk_event_ublk.a
00:06:20.975 LIB libspdk_event_scsi.a
00:06:20.975 SO libspdk_event_nbd.so.6.0
00:06:20.975 SO libspdk_event_ublk.so.3.0
00:06:20.975 SO libspdk_event_scsi.so.6.0
00:06:20.975 SYMLINK libspdk_event_nbd.so
00:06:20.975 SYMLINK libspdk_event_ublk.so
00:06:20.975 SYMLINK libspdk_event_scsi.so
00:06:20.975 LIB libspdk_event_nvmf.a
00:06:20.975 SO libspdk_event_nvmf.so.6.0
00:06:21.234 SYMLINK libspdk_event_nvmf.so
00:06:21.234 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:06:21.234 CC module/event/subsystems/iscsi/iscsi.o
00:06:21.234 LIB libspdk_event_vhost_scsi.a
00:06:21.234 SO libspdk_event_vhost_scsi.so.3.0
00:06:21.492 LIB libspdk_event_iscsi.a
00:06:21.492 SO libspdk_event_iscsi.so.6.0
00:06:21.492 SYMLINK libspdk_event_vhost_scsi.so
00:06:21.492 SYMLINK libspdk_event_iscsi.so
00:06:21.492 SO libspdk.so.6.0
00:06:21.492 SYMLINK libspdk.so
00:06:21.757 CXX app/trace/trace.o
00:06:21.757 CC app/trace_record/trace_record.o
00:06:21.757 CC app/spdk_nvme_perf/perf.o
00:06:21.757 CC app/spdk_nvme_identify/identify.o
00:06:21.757 CC app/spdk_nvme_discover/discovery_aer.o
00:06:21.757 TEST_HEADER include/spdk/accel.h
00:06:21.757 TEST_HEADER include/spdk/accel_module.h
00:06:21.757 CC app/spdk_lspci/spdk_lspci.o
00:06:21.757 TEST_HEADER include/spdk/assert.h
00:06:21.757 TEST_HEADER include/spdk/barrier.h
00:06:21.757 TEST_HEADER include/spdk/base64.h
00:06:21.757 CC test/rpc_client/rpc_client_test.o
00:06:21.757 CC app/spdk_top/spdk_top.o
00:06:21.757 TEST_HEADER include/spdk/bdev.h
00:06:21.757 TEST_HEADER include/spdk/bdev_module.h
00:06:21.757 TEST_HEADER include/spdk/bdev_zone.h
00:06:21.757 TEST_HEADER include/spdk/bit_array.h
00:06:21.757 TEST_HEADER include/spdk/bit_pool.h
00:06:21.757 TEST_HEADER include/spdk/blob_bdev.h
00:06:21.757 TEST_HEADER include/spdk/blobfs_bdev.h
00:06:21.757 TEST_HEADER include/spdk/blob.h
00:06:21.757 TEST_HEADER include/spdk/blobfs.h
00:06:21.757 TEST_HEADER include/spdk/conf.h
00:06:21.757 TEST_HEADER include/spdk/cpuset.h
00:06:21.757 TEST_HEADER include/spdk/config.h
00:06:21.757 TEST_HEADER include/spdk/crc16.h
00:06:21.757 TEST_HEADER include/spdk/crc32.h
00:06:21.757 TEST_HEADER include/spdk/crc64.h
00:06:21.757 TEST_HEADER include/spdk/dif.h
00:06:21.757 TEST_HEADER include/spdk/dma.h
00:06:21.757 TEST_HEADER include/spdk/endian.h
00:06:21.757 TEST_HEADER include/spdk/env_dpdk.h
00:06:21.757 TEST_HEADER include/spdk/env.h
00:06:21.757 TEST_HEADER include/spdk/event.h
00:06:21.757 TEST_HEADER include/spdk/fd_group.h
00:06:21.757 TEST_HEADER include/spdk/fd.h
00:06:21.757 TEST_HEADER include/spdk/file.h
00:06:21.757 TEST_HEADER include/spdk/fsdev.h
00:06:21.757 TEST_HEADER include/spdk/fsdev_module.h
00:06:21.757 TEST_HEADER include/spdk/ftl.h
00:06:21.757 TEST_HEADER include/spdk/fuse_dispatcher.h
00:06:21.757 TEST_HEADER include/spdk/hexlify.h
00:06:21.757 TEST_HEADER include/spdk/gpt_spec.h
00:06:21.757 TEST_HEADER include/spdk/histogram_data.h
00:06:21.757
TEST_HEADER include/spdk/idxd.h
00:06:21.757 TEST_HEADER include/spdk/idxd_spec.h
00:06:21.757 TEST_HEADER include/spdk/init.h
00:06:21.757 TEST_HEADER include/spdk/ioat.h
00:06:21.757 TEST_HEADER include/spdk/ioat_spec.h
00:06:21.757 TEST_HEADER include/spdk/iscsi_spec.h
00:06:21.757 TEST_HEADER include/spdk/json.h
00:06:21.757 TEST_HEADER include/spdk/jsonrpc.h
00:06:21.757 TEST_HEADER include/spdk/keyring.h
00:06:21.757 TEST_HEADER include/spdk/keyring_module.h
00:06:21.757 TEST_HEADER include/spdk/likely.h
00:06:21.757 TEST_HEADER include/spdk/log.h
00:06:21.757 TEST_HEADER include/spdk/lvol.h
00:06:21.757 TEST_HEADER include/spdk/md5.h
00:06:21.757 TEST_HEADER include/spdk/memory.h
00:06:21.757 TEST_HEADER include/spdk/mmio.h
00:06:21.757 TEST_HEADER include/spdk/nbd.h
00:06:21.757 TEST_HEADER include/spdk/net.h
00:06:21.757 TEST_HEADER include/spdk/notify.h
00:06:21.757 TEST_HEADER include/spdk/nvme.h
00:06:21.757 TEST_HEADER include/spdk/nvme_intel.h
00:06:21.757 TEST_HEADER include/spdk/nvme_ocssd.h
00:06:21.757 TEST_HEADER include/spdk/nvme_spec.h
00:06:21.757 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:06:21.757 TEST_HEADER include/spdk/nvme_zns.h
00:06:21.757 TEST_HEADER include/spdk/nvmf_cmd.h
00:06:21.757 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:06:21.757 TEST_HEADER include/spdk/nvmf.h
00:06:21.757 TEST_HEADER include/spdk/nvmf_spec.h
00:06:21.757 TEST_HEADER include/spdk/nvmf_transport.h
00:06:21.757 TEST_HEADER include/spdk/opal.h
00:06:21.757 TEST_HEADER include/spdk/opal_spec.h
00:06:21.757 TEST_HEADER include/spdk/pci_ids.h
00:06:21.757 TEST_HEADER include/spdk/queue.h
00:06:21.757 TEST_HEADER include/spdk/pipe.h
00:06:21.757 TEST_HEADER include/spdk/reduce.h
00:06:21.757 TEST_HEADER include/spdk/rpc.h
00:06:21.757 TEST_HEADER include/spdk/scheduler.h
00:06:21.757 TEST_HEADER include/spdk/scsi.h
00:06:21.757 TEST_HEADER include/spdk/sock.h
00:06:21.757 TEST_HEADER include/spdk/scsi_spec.h
00:06:21.757 TEST_HEADER include/spdk/stdinc.h
00:06:21.757 TEST_HEADER include/spdk/string.h
00:06:21.757 TEST_HEADER include/spdk/thread.h
00:06:21.757 TEST_HEADER include/spdk/trace.h
00:06:21.757 CC examples/interrupt_tgt/interrupt_tgt.o
00:06:21.757 TEST_HEADER include/spdk/trace_parser.h
00:06:21.757 TEST_HEADER include/spdk/tree.h
00:06:21.757 TEST_HEADER include/spdk/util.h
00:06:21.757 TEST_HEADER include/spdk/ublk.h
00:06:21.757 TEST_HEADER include/spdk/uuid.h
00:06:21.757 TEST_HEADER include/spdk/version.h
00:06:21.757 TEST_HEADER include/spdk/vfio_user_pci.h
00:06:21.757 TEST_HEADER include/spdk/vfio_user_spec.h
00:06:21.757 TEST_HEADER include/spdk/vhost.h
00:06:21.757 TEST_HEADER include/spdk/vmd.h
00:06:21.757 TEST_HEADER include/spdk/xor.h
00:06:21.757 TEST_HEADER include/spdk/zipf.h
00:06:21.758 CXX test/cpp_headers/accel.o
00:06:21.758 CXX test/cpp_headers/accel_module.o
00:06:21.758 CXX test/cpp_headers/assert.o
00:06:21.758 CXX test/cpp_headers/barrier.o
00:06:21.758 CXX test/cpp_headers/base64.o
00:06:21.758 CXX test/cpp_headers/bdev.o
00:06:21.758 CXX test/cpp_headers/bdev_module.o
00:06:21.758 CXX test/cpp_headers/bdev_zone.o
00:06:21.758 CXX test/cpp_headers/bit_array.o
00:06:21.758 CXX test/cpp_headers/bit_pool.o
00:06:21.758 CXX test/cpp_headers/blob_bdev.o
00:06:21.758 CC app/spdk_dd/spdk_dd.o
00:06:21.758 CXX test/cpp_headers/blobfs_bdev.o
00:06:21.758 CXX test/cpp_headers/blobfs.o
00:06:21.758 CXX test/cpp_headers/blob.o
00:06:21.758 CXX test/cpp_headers/conf.o
00:06:21.758 CXX test/cpp_headers/config.o
00:06:21.758 CXX test/cpp_headers/cpuset.o
00:06:21.758 CXX test/cpp_headers/crc16.o
00:06:21.758 CC app/nvmf_tgt/nvmf_main.o
00:06:21.758 CC app/iscsi_tgt/iscsi_tgt.o
00:06:21.758 CXX test/cpp_headers/crc32.o
00:06:21.758 CC examples/ioat/perf/perf.o
00:06:21.758 CC app/spdk_tgt/spdk_tgt.o
00:06:22.018 CC test/app/jsoncat/jsoncat.o
00:06:22.018 CC examples/util/zipf/zipf.o
00:06:22.018 CC app/fio/nvme/fio_plugin.o
00:06:22.018 CC examples/ioat/verify/verify.o
00:06:22.018 CC test/app/histogram_perf/histogram_perf.o
00:06:22.019 CC test/thread/poller_perf/poller_perf.o
00:06:22.019 CC test/env/memory/memory_ut.o
00:06:22.019 CC test/env/vtophys/vtophys.o
00:06:22.019 CC test/env/pci/pci_ut.o
00:06:22.019 CC test/app/stub/stub.o
00:06:22.019 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:06:22.019 CC test/app/bdev_svc/bdev_svc.o
00:06:22.019 CC test/dma/test_dma/test_dma.o
00:06:22.019 CC app/fio/bdev/fio_plugin.o
00:06:22.019 LINK spdk_lspci
00:06:22.019 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:06:22.019 CC test/env/mem_callbacks/mem_callbacks.o
00:06:22.280 LINK rpc_client_test
00:06:22.280 LINK spdk_nvme_discover
00:06:22.280 LINK histogram_perf
00:06:22.280 LINK vtophys
00:06:22.280 LINK spdk_trace_record
00:06:22.280 LINK poller_perf
00:06:22.280 LINK jsoncat
00:06:22.280 LINK interrupt_tgt
00:06:22.280 LINK zipf
00:06:22.280 CXX test/cpp_headers/crc64.o
00:06:22.280 CXX test/cpp_headers/dif.o
00:06:22.280 CXX test/cpp_headers/dma.o
00:06:22.280 LINK nvmf_tgt
00:06:22.280 CXX test/cpp_headers/endian.o
00:06:22.280 LINK env_dpdk_post_init
00:06:22.280 CXX test/cpp_headers/env_dpdk.o
00:06:22.280 CXX test/cpp_headers/env.o
00:06:22.280 CXX test/cpp_headers/event.o
00:06:22.280 CXX test/cpp_headers/fd_group.o
00:06:22.280 LINK stub
00:06:22.280 CXX test/cpp_headers/fd.o
00:06:22.280 LINK iscsi_tgt
00:06:22.280 CXX test/cpp_headers/file.o
00:06:22.280 CXX test/cpp_headers/fsdev.o
00:06:22.280 CXX test/cpp_headers/fsdev_module.o
00:06:22.280 CXX test/cpp_headers/ftl.o
00:06:22.280 CXX test/cpp_headers/fuse_dispatcher.o
00:06:22.280 CXX test/cpp_headers/gpt_spec.o
00:06:22.280 LINK ioat_perf
00:06:22.280 CXX test/cpp_headers/hexlify.o
00:06:22.280 CXX test/cpp_headers/histogram_data.o
00:06:22.546 LINK spdk_tgt
00:06:22.546 LINK bdev_svc
00:06:22.546 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:06:22.546 LINK verify
00:06:22.546 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:06:22.546 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:06:22.546 CXX test/cpp_headers/idxd.o
00:06:22.546 CXX test/cpp_headers/idxd_spec.o
00:06:22.546 CXX test/cpp_headers/init.o
00:06:22.546 CXX test/cpp_headers/ioat.o
00:06:22.546 CXX test/cpp_headers/ioat_spec.o
00:06:22.546 CXX test/cpp_headers/iscsi_spec.o
00:06:22.546 LINK spdk_dd
00:06:22.546 CXX test/cpp_headers/json.o
00:06:22.546 CXX test/cpp_headers/jsonrpc.o
00:06:22.546 LINK spdk_trace
00:06:22.809 CXX test/cpp_headers/keyring.o
00:06:22.809 CXX test/cpp_headers/keyring_module.o
00:06:22.809 CXX test/cpp_headers/likely.o
00:06:22.809 CXX test/cpp_headers/log.o
00:06:22.809 LINK pci_ut
00:06:22.809 CXX test/cpp_headers/lvol.o
00:06:22.809 CXX test/cpp_headers/md5.o
00:06:22.809 CXX test/cpp_headers/memory.o
00:06:22.809 CXX test/cpp_headers/mmio.o
00:06:22.809 CXX test/cpp_headers/nbd.o
00:06:22.809 CXX test/cpp_headers/net.o
00:06:22.809 CXX test/cpp_headers/notify.o
00:06:22.809 CXX test/cpp_headers/nvme.o
00:06:22.809 CXX test/cpp_headers/nvme_intel.o
00:06:22.809 CXX test/cpp_headers/nvme_ocssd.o
00:06:22.809 CXX test/cpp_headers/nvme_ocssd_spec.o
00:06:22.809 CXX test/cpp_headers/nvme_spec.o
00:06:22.809 CXX test/cpp_headers/nvme_zns.o
00:06:22.809 CXX test/cpp_headers/nvmf_cmd.o
00:06:22.809 CXX test/cpp_headers/nvmf_fc_spec.o
00:06:22.809 CXX test/cpp_headers/nvmf.o
00:06:22.809 CXX test/cpp_headers/nvmf_spec.o
00:06:23.077 CXX test/cpp_headers/nvmf_transport.o
00:06:23.077 LINK nvme_fuzz
00:06:23.077 CXX test/cpp_headers/opal.o
00:06:23.077 CC test/event/event_perf/event_perf.o
00:06:23.077 CXX test/cpp_headers/opal_spec.o
00:06:23.077 CC test/event/reactor/reactor.o
00:06:23.077 CXX test/cpp_headers/pci_ids.o
00:06:23.077 CXX test/cpp_headers/pipe.o
00:06:23.077 CC test/event/reactor_perf/reactor_perf.o
00:06:23.077 CC examples/sock/hello_world/hello_sock.o
00:06:23.077 LINK spdk_nvme
00:06:23.077 CC examples/thread/thread/thread_ex.o
00:06:23.077 LINK spdk_bdev
00:06:23.077 LINK test_dma
00:06:23.077 CXX test/cpp_headers/queue.o
00:06:23.077 CC examples/idxd/perf/perf.o
00:06:23.077 CC examples/vmd/lsvmd/lsvmd.o
00:06:23.077 CC test/event/app_repeat/app_repeat.o
00:06:23.077 CXX test/cpp_headers/reduce.o
00:06:23.077 CXX test/cpp_headers/rpc.o
00:06:23.077 CC examples/vmd/led/led.o
00:06:23.077 CXX test/cpp_headers/scheduler.o
00:06:23.077 CXX test/cpp_headers/scsi.o
00:06:23.077 CXX test/cpp_headers/scsi_spec.o
00:06:23.077 CC test/event/scheduler/scheduler.o
00:06:23.077 CXX test/cpp_headers/sock.o
00:06:23.077 CXX test/cpp_headers/stdinc.o
00:06:23.077 CXX test/cpp_headers/string.o
00:06:23.077 CXX test/cpp_headers/thread.o
00:06:23.338 CXX test/cpp_headers/trace.o
00:06:23.338 CXX test/cpp_headers/trace_parser.o
00:06:23.338 CXX test/cpp_headers/tree.o
00:06:23.338 CXX test/cpp_headers/ublk.o
00:06:23.338 CC app/vhost/vhost.o
00:06:23.338 CXX test/cpp_headers/util.o
00:06:23.338 CXX test/cpp_headers/uuid.o
00:06:23.338 CXX test/cpp_headers/version.o
00:06:23.338 CXX test/cpp_headers/vfio_user_pci.o
00:06:23.338 CXX test/cpp_headers/vfio_user_spec.o
00:06:23.338 CXX test/cpp_headers/vhost.o
00:06:23.338 LINK reactor
00:06:23.338 CXX test/cpp_headers/vmd.o
00:06:23.338 CXX test/cpp_headers/xor.o
00:06:23.338 CXX test/cpp_headers/zipf.o
00:06:23.338 LINK event_perf
00:06:23.338 LINK vhost_fuzz
00:06:23.338 LINK reactor_perf
00:06:23.338 LINK spdk_nvme_perf
00:06:23.338 LINK mem_callbacks
00:06:23.338 LINK lsvmd
00:06:23.338 LINK spdk_nvme_identify
00:06:23.338 LINK app_repeat
00:06:23.598 LINK led
00:06:23.598 LINK spdk_top
00:06:23.598 LINK hello_sock
00:06:23.598 LINK thread
00:06:23.598 LINK scheduler
00:06:23.598 LINK vhost
00:06:23.598 LINK idxd_perf
00:06:23.598 CC test/nvme/aer/aer.o
00:06:23.598 CC test/nvme/e2edp/nvme_dp.o
00:06:23.598 CC test/nvme/connect_stress/connect_stress.o
00:06:23.598 CC test/nvme/err_injection/err_injection.o
00:06:23.598 CC test/nvme/reset/reset.o
00:06:23.598 CC test/nvme/overhead/overhead.o
00:06:23.598 CC test/nvme/reserve/reserve.o
00:06:23.598 CC test/nvme/startup/startup.o
00:06:23.598 CC test/nvme/simple_copy/simple_copy.o
00:06:23.598 CC test/nvme/doorbell_aers/doorbell_aers.o
00:06:23.857 CC test/nvme/boot_partition/boot_partition.o
00:06:23.857 CC test/nvme/cuse/cuse.o
00:06:23.857 CC test/nvme/sgl/sgl.o
00:06:23.857 CC test/nvme/compliance/nvme_compliance.o
00:06:23.857 CC test/nvme/fdp/fdp.o
00:06:23.857 CC test/nvme/fused_ordering/fused_ordering.o
00:06:23.857 CC test/accel/dif/dif.o
00:06:23.857 CC test/blobfs/mkfs/mkfs.o
00:06:23.857 CC test/lvol/esnap/esnap.o
00:06:24.116 LINK boot_partition
00:06:24.116 LINK connect_stress
00:06:24.116 LINK doorbell_aers
00:06:24.116 CC examples/nvme/hotplug/hotplug.o
00:06:24.116 CC examples/nvme/hello_world/hello_world.o
00:06:24.116 CC examples/nvme/reconnect/reconnect.o
00:06:24.116 CC examples/nvme/nvme_manage/nvme_manage.o
00:06:24.116 CC examples/nvme/cmb_copy/cmb_copy.o
00:06:24.116 LINK fused_ordering
00:06:24.116 CC examples/nvme/arbitration/arbitration.o
00:06:24.116 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:06:24.116 CC examples/nvme/abort/abort.o
00:06:24.116 LINK reserve
00:06:24.116 LINK memory_ut
00:06:24.116 LINK err_injection
00:06:24.116 LINK mkfs
00:06:24.116 LINK startup
00:06:24.116 LINK simple_copy
00:06:24.116 LINK aer
00:06:24.116 LINK reset
00:06:24.116 LINK sgl
00:06:24.116 CC examples/accel/perf/accel_perf.o
00:06:24.116 LINK overhead
00:06:24.116 CC examples/blob/cli/blobcli.o
00:06:24.116 CC examples/blob/hello_world/hello_blob.o
00:06:24.116 CC examples/fsdev/hello_world/hello_fsdev.o
00:06:24.116 LINK nvme_compliance
00:06:24.116 LINK nvme_dp
00:06:24.374 LINK pmr_persistence
00:06:24.374 LINK fdp
00:06:24.374 LINK hotplug
00:06:24.374 LINK hello_world
00:06:24.374 LINK cmb_copy
00:06:24.374 LINK reconnect
00:06:24.374 LINK hello_blob
00:06:24.633 LINK arbitration
00:06:24.633 LINK abort
00:06:24.633 LINK hello_fsdev
00:06:24.633 LINK dif
00:06:24.633 LINK nvme_manage
00:06:24.633 LINK accel_perf
00:06:24.892 LINK
blobcli 00:06:24.892 LINK iscsi_fuzz 00:06:25.150 CC test/bdev/bdevio/bdevio.o 00:06:25.150 CC examples/bdev/hello_world/hello_bdev.o 00:06:25.150 CC examples/bdev/bdevperf/bdevperf.o 00:06:25.408 LINK hello_bdev 00:06:25.408 LINK cuse 00:06:25.408 LINK bdevio 00:06:25.974 LINK bdevperf 00:06:26.232 CC examples/nvmf/nvmf/nvmf.o 00:06:26.490 LINK nvmf 00:06:29.018 LINK esnap 00:06:29.275 00:06:29.275 real 1m6.548s 00:06:29.275 user 9m4.179s 00:06:29.275 sys 2m0.128s 00:06:29.275 12:20:58 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:29.275 12:20:58 make -- common/autotest_common.sh@10 -- $ set +x 00:06:29.275 ************************************ 00:06:29.275 END TEST make 00:06:29.275 ************************************ 00:06:29.275 12:20:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:29.275 12:20:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:29.275 12:20:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:29.275 12:20:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.275 12:20:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:29.275 12:20:58 -- pm/common@44 -- $ pid=419516 00:06:29.275 12:20:58 -- pm/common@50 -- $ kill -TERM 419516 00:06:29.275 12:20:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.275 12:20:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:29.275 12:20:58 -- pm/common@44 -- $ pid=419518 00:06:29.275 12:20:58 -- pm/common@50 -- $ kill -TERM 419518 00:06:29.275 12:20:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.275 12:20:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:29.275 12:20:58 -- pm/common@44 -- $ pid=419520 00:06:29.275 12:20:58 -- pm/common@50 -- $ kill -TERM 419520 00:06:29.275 
12:20:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.275 12:20:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:29.275 12:20:58 -- pm/common@44 -- $ pid=419550 00:06:29.275 12:20:58 -- pm/common@50 -- $ sudo -E kill -TERM 419550 00:06:29.275 12:20:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:29.275 12:20:58 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:29.275 12:20:58 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.275 12:20:58 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.275 12:20:58 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.534 12:20:58 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.534 12:20:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.534 12:20:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.534 12:20:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.534 12:20:58 -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.534 12:20:58 -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.534 12:20:58 -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.534 12:20:58 -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.534 12:20:58 -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.534 12:20:58 -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.534 12:20:58 -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.534 12:20:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.534 12:20:58 -- scripts/common.sh@344 -- # case "$op" in 00:06:29.534 12:20:58 -- scripts/common.sh@345 -- # : 1 00:06:29.534 12:20:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.534 12:20:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.534 12:20:58 -- scripts/common.sh@365 -- # decimal 1 00:06:29.534 12:20:58 -- scripts/common.sh@353 -- # local d=1 00:06:29.534 12:20:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.534 12:20:58 -- scripts/common.sh@355 -- # echo 1 00:06:29.534 12:20:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.534 12:20:58 -- scripts/common.sh@366 -- # decimal 2 00:06:29.534 12:20:58 -- scripts/common.sh@353 -- # local d=2 00:06:29.534 12:20:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.534 12:20:58 -- scripts/common.sh@355 -- # echo 2 00:06:29.534 12:20:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.534 12:20:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.534 12:20:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.534 12:20:58 -- scripts/common.sh@368 -- # return 0 00:06:29.534 12:20:58 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.534 12:20:58 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.534 --rc genhtml_branch_coverage=1 00:06:29.534 --rc genhtml_function_coverage=1 00:06:29.534 --rc genhtml_legend=1 00:06:29.534 --rc geninfo_all_blocks=1 00:06:29.534 --rc geninfo_unexecuted_blocks=1 00:06:29.534 00:06:29.534 ' 00:06:29.534 12:20:58 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.534 --rc genhtml_branch_coverage=1 00:06:29.534 --rc genhtml_function_coverage=1 00:06:29.534 --rc genhtml_legend=1 00:06:29.534 --rc geninfo_all_blocks=1 00:06:29.534 --rc geninfo_unexecuted_blocks=1 00:06:29.534 00:06:29.534 ' 00:06:29.534 12:20:58 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.534 --rc genhtml_branch_coverage=1 00:06:29.534 --rc 
genhtml_function_coverage=1 00:06:29.534 --rc genhtml_legend=1 00:06:29.534 --rc geninfo_all_blocks=1 00:06:29.534 --rc geninfo_unexecuted_blocks=1 00:06:29.534 00:06:29.534 ' 00:06:29.534 12:20:58 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.534 --rc genhtml_branch_coverage=1 00:06:29.534 --rc genhtml_function_coverage=1 00:06:29.534 --rc genhtml_legend=1 00:06:29.534 --rc geninfo_all_blocks=1 00:06:29.534 --rc geninfo_unexecuted_blocks=1 00:06:29.534 00:06:29.534 ' 00:06:29.534 12:20:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.534 12:20:58 -- nvmf/common.sh@7 -- # uname -s 00:06:29.534 12:20:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.534 12:20:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.534 12:20:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.534 12:20:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.534 12:20:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.534 12:20:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.534 12:20:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.534 12:20:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.534 12:20:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.534 12:20:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.534 12:20:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.534 12:20:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:29.534 12:20:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.534 12:20:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.534 12:20:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.534 12:20:58 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.534 12:20:58 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.534 12:20:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.534 12:20:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.534 12:20:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.534 12:20:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.534 12:20:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.534 12:20:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.534 12:20:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.534 12:20:58 -- paths/export.sh@5 -- # export PATH 00:06:29.535 12:20:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.535 12:20:58 -- nvmf/common.sh@51 -- # : 0 00:06:29.535 12:20:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.535 12:20:58 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:06:29.535 12:20:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.535 12:20:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.535 12:20:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.535 12:20:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.535 12:20:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.535 12:20:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.535 12:20:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.535 12:20:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:29.535 12:20:58 -- spdk/autotest.sh@32 -- # uname -s 00:06:29.535 12:20:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:29.535 12:20:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:29.535 12:20:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:29.535 12:20:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:29.535 12:20:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:29.535 12:20:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:29.535 12:20:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:29.535 12:20:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:29.535 12:20:58 -- spdk/autotest.sh@48 -- # udevadm_pid=500674 00:06:29.535 12:20:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:29.535 12:20:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:29.535 12:20:58 -- pm/common@17 -- # local monitor 00:06:29.535 12:20:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.535 12:20:58 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:06:29.535 12:20:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.535 12:20:58 -- pm/common@21 -- # date +%s 00:06:29.535 12:20:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:29.535 12:20:58 -- pm/common@21 -- # date +%s 00:06:29.535 12:20:58 -- pm/common@25 -- # sleep 1 00:06:29.535 12:20:58 -- pm/common@21 -- # date +%s 00:06:29.535 12:20:58 -- pm/common@21 -- # date +%s 00:06:29.535 12:20:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730805658 00:06:29.535 12:20:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730805658 00:06:29.535 12:20:58 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730805658 00:06:29.535 12:20:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730805658 00:06:29.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730805658_collect-vmstat.pm.log 00:06:29.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730805658_collect-cpu-load.pm.log 00:06:29.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730805658_collect-cpu-temp.pm.log 00:06:29.535 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730805658_collect-bmc-pm.bmc.pm.log 00:06:30.469 
12:20:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:30.469 12:20:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:30.469 12:20:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.469 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.469 12:20:59 -- spdk/autotest.sh@59 -- # create_test_list 00:06:30.469 12:20:59 -- common/autotest_common.sh@750 -- # xtrace_disable 00:06:30.469 12:20:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.469 12:20:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:30.469 12:20:59 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.469 12:20:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.469 12:20:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:30.469 12:20:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.469 12:20:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:30.469 12:20:59 -- common/autotest_common.sh@1455 -- # uname 00:06:30.469 12:20:59 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:30.469 12:20:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:30.469 12:20:59 -- common/autotest_common.sh@1475 -- # uname 00:06:30.469 12:20:59 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:30.469 12:20:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:30.469 12:20:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:30.469 lcov: LCOV version 1.15 00:06:30.469 12:20:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:48.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:48.551 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:10.478 12:21:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:10.478 12:21:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.478 12:21:36 -- common/autotest_common.sh@10 -- # set +x 00:07:10.478 12:21:36 -- spdk/autotest.sh@78 -- # rm -f 00:07:10.478 12:21:36 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:10.478 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:07:10.478 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:07:10.478 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:07:10.478 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:07:10.478 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:07:10.478 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:07:10.478 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:07:10.478 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:07:10.478 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:07:10.478 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:07:10.478 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:07:10.478 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:07:10.478 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:07:10.478 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:07:10.478 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:07:10.478 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:07:10.478 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:07:10.478 12:21:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:10.478 12:21:37 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:10.479 12:21:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:10.479 12:21:37 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:10.479 12:21:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.479 12:21:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:10.479 12:21:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:10.479 12:21:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:10.479 12:21:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.479 12:21:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:10.479 12:21:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:10.479 12:21:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:10.479 12:21:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:10.479 12:21:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:10.479 12:21:37 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:10.479 No valid GPT data, bailing 00:07:10.479 12:21:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:10.479 12:21:37 -- scripts/common.sh@394 -- # pt= 00:07:10.479 12:21:37 -- scripts/common.sh@395 -- # return 1 00:07:10.479 12:21:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:10.479 1+0 records in 00:07:10.479 1+0 records out 00:07:10.479 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00166727 s, 629 MB/s 00:07:10.479 12:21:37 -- spdk/autotest.sh@105 -- # sync 00:07:10.479 12:21:37 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:10.479 12:21:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:10.479 12:21:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:10.479 12:21:39 -- spdk/autotest.sh@111 -- # uname -s 00:07:10.479 12:21:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:10.479 12:21:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:10.479 12:21:39 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:11.857 Hugepages 00:07:11.857 node hugesize free / total 00:07:11.857 node0 1048576kB 0 / 0 00:07:11.857 node0 2048kB 0 / 0 00:07:11.857 node1 1048576kB 0 / 0 00:07:11.857 node1 2048kB 0 / 0 00:07:11.857 00:07:11.857 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:11.857 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:07:11.857 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:07:11.857 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:07:11.857 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:07:11.857 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:07:11.857 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:07:11.857 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:07:11.857 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:07:11.857 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:07:11.857 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:11.857 12:21:41 -- spdk/autotest.sh@117 -- # uname -s 00:07:11.857 12:21:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:11.857 12:21:41 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:07:11.857 12:21:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:13.234 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:13.234 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:13.234 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:13.234 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:13.234 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:13.234 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:13.234 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:13.235 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:13.235 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:14.175 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:07:14.175 12:21:43 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:15.557 12:21:44 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:15.557 12:21:44 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:15.557 12:21:44 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:15.557 12:21:44 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:15.557 12:21:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:15.557 12:21:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:15.557 12:21:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:15.557 12:21:44 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:15.557 12:21:44 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:07:15.557 12:21:44 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:15.557 12:21:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:07:15.557 12:21:44 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:16.492 Waiting for block devices as requested 00:07:16.492 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:07:16.751 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:16.751 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:16.751 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:17.010 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:17.010 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:17.010 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:17.010 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:17.270 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:17.270 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:17.270 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:17.529 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:17.529 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:17.529 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:17.529 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:17.788 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:17.788 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:17.788 12:21:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:17.788 12:21:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:07:17.788 12:21:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:07:17.788 12:21:47 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:07:17.788 12:21:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:07:17.788 12:21:47 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:07:17.788 12:21:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:07:17.788 12:21:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:17.788 12:21:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:17.788 12:21:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:18.046 12:21:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:18.046 12:21:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:18.046 12:21:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:18.046 12:21:47 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:07:18.046 12:21:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:18.046 12:21:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:18.046 12:21:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:18.046 12:21:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:18.046 12:21:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:18.046 12:21:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:18.046 12:21:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:18.046 12:21:47 -- common/autotest_common.sh@1541 -- # continue 00:07:18.046 12:21:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:18.046 12:21:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:18.046 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:07:18.046 12:21:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:18.046 12:21:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:18.046 12:21:47 -- common/autotest_common.sh@10 -- # set +x 00:07:18.046 12:21:47 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:19.423 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:19.423 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:07:19.423 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:19.423 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:19.423 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:19.423 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:19.423 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:19.423 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:19.423 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:20.361 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:07:20.361 12:21:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:20.361 12:21:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.361 12:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:20.361 12:21:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:20.361 12:21:49 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:20.361 12:21:49 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:20.361 12:21:49 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:20.361 12:21:49 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:20.361 12:21:49 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:20.361 12:21:49 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:20.361 12:21:49 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:20.361 12:21:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:20.361 12:21:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:20.361 12:21:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:07:20.361 12:21:49 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:20.361 12:21:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:20.361 12:21:49 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:20.361 12:21:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:07:20.361 12:21:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:20.361 12:21:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:07:20.361 12:21:49 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:07:20.361 12:21:49 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:20.361 12:21:49 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:07:20.361 12:21:49 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:07:20.361 12:21:49 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:07:20.361 12:21:49 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:07:20.361 12:21:49 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=511940 00:07:20.361 12:21:49 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.361 12:21:49 -- common/autotest_common.sh@1583 -- # waitforlisten 511940 00:07:20.361 12:21:49 -- common/autotest_common.sh@833 -- # '[' -z 511940 ']' 00:07:20.361 12:21:49 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.361 12:21:49 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:20.361 12:21:49 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.361 12:21:49 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:20.361 12:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:20.361 [2024-11-05 12:21:49.598851] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:07:20.361 [2024-11-05 12:21:49.598949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511940 ] 00:07:20.619 [2024-11-05 12:21:49.664184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.619 [2024-11-05 12:21:49.707078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.877 12:21:49 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:20.877 12:21:49 -- common/autotest_common.sh@866 -- # return 0 00:07:20.877 12:21:49 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:07:20.877 12:21:49 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:07:20.877 12:21:49 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:07:24.161 nvme0n1 00:07:24.161 12:21:53 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:24.161 [2024-11-05 12:21:53.288896] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:07:24.161 [2024-11-05 12:21:53.288943] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:07:24.161 request: 00:07:24.161 { 00:07:24.161 "nvme_ctrlr_name": "nvme0", 00:07:24.161 "password": "test", 00:07:24.161 "method": "bdev_nvme_opal_revert", 00:07:24.161 "req_id": 1 00:07:24.161 } 00:07:24.161 Got JSON-RPC error response 00:07:24.161 response: 00:07:24.161 { 00:07:24.161 
"code": -32603, 00:07:24.161 "message": "Internal error" 00:07:24.161 } 00:07:24.161 12:21:53 -- common/autotest_common.sh@1589 -- # true 00:07:24.161 12:21:53 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:07:24.161 12:21:53 -- common/autotest_common.sh@1593 -- # killprocess 511940 00:07:24.161 12:21:53 -- common/autotest_common.sh@952 -- # '[' -z 511940 ']' 00:07:24.161 12:21:53 -- common/autotest_common.sh@956 -- # kill -0 511940 00:07:24.161 12:21:53 -- common/autotest_common.sh@957 -- # uname 00:07:24.161 12:21:53 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:24.161 12:21:53 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 511940 00:07:24.161 12:21:53 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:24.161 12:21:53 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:24.161 12:21:53 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 511940' 00:07:24.161 killing process with pid 511940 00:07:24.161 12:21:53 -- common/autotest_common.sh@971 -- # kill 511940 00:07:24.161 12:21:53 -- common/autotest_common.sh@976 -- # wait 511940 00:07:26.057 12:21:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:26.057 12:21:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:26.057 12:21:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:26.057 12:21:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:26.057 12:21:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:26.057 12:21:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.057 12:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:26.057 12:21:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:26.057 12:21:55 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:26.057 12:21:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.057 12:21:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.057 12:21:55 -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.057 ************************************ 00:07:26.057 START TEST env 00:07:26.057 ************************************ 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:26.057 * Looking for test storage... 00:07:26.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:26.057 12:21:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.057 12:21:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.057 12:21:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.057 12:21:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.057 12:21:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.057 12:21:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.057 12:21:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.057 12:21:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.057 12:21:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.057 12:21:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.057 12:21:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.057 12:21:55 env -- scripts/common.sh@344 -- # case "$op" in 00:07:26.057 12:21:55 env -- scripts/common.sh@345 -- # : 1 00:07:26.057 12:21:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.057 12:21:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.057 12:21:55 env -- scripts/common.sh@365 -- # decimal 1 00:07:26.057 12:21:55 env -- scripts/common.sh@353 -- # local d=1 00:07:26.057 12:21:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.057 12:21:55 env -- scripts/common.sh@355 -- # echo 1 00:07:26.057 12:21:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.057 12:21:55 env -- scripts/common.sh@366 -- # decimal 2 00:07:26.057 12:21:55 env -- scripts/common.sh@353 -- # local d=2 00:07:26.057 12:21:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.057 12:21:55 env -- scripts/common.sh@355 -- # echo 2 00:07:26.057 12:21:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.057 12:21:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.057 12:21:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.057 12:21:55 env -- scripts/common.sh@368 -- # return 0 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:26.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.057 --rc genhtml_branch_coverage=1 00:07:26.057 --rc genhtml_function_coverage=1 00:07:26.057 --rc genhtml_legend=1 00:07:26.057 --rc geninfo_all_blocks=1 00:07:26.057 --rc geninfo_unexecuted_blocks=1 00:07:26.057 00:07:26.057 ' 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:26.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.057 --rc genhtml_branch_coverage=1 00:07:26.057 --rc genhtml_function_coverage=1 00:07:26.057 --rc genhtml_legend=1 00:07:26.057 --rc geninfo_all_blocks=1 00:07:26.057 --rc geninfo_unexecuted_blocks=1 00:07:26.057 00:07:26.057 ' 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:26.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:26.057 --rc genhtml_branch_coverage=1 00:07:26.057 --rc genhtml_function_coverage=1 00:07:26.057 --rc genhtml_legend=1 00:07:26.057 --rc geninfo_all_blocks=1 00:07:26.057 --rc geninfo_unexecuted_blocks=1 00:07:26.057 00:07:26.057 ' 00:07:26.057 12:21:55 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:26.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.058 --rc genhtml_branch_coverage=1 00:07:26.058 --rc genhtml_function_coverage=1 00:07:26.058 --rc genhtml_legend=1 00:07:26.058 --rc geninfo_all_blocks=1 00:07:26.058 --rc geninfo_unexecuted_blocks=1 00:07:26.058 00:07:26.058 ' 00:07:26.058 12:21:55 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:26.058 12:21:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.058 12:21:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.058 12:21:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:26.058 ************************************ 00:07:26.058 START TEST env_memory 00:07:26.058 ************************************ 00:07:26.058 12:21:55 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:26.058 00:07:26.058 00:07:26.058 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.058 http://cunit.sourceforge.net/ 00:07:26.058 00:07:26.058 00:07:26.058 Suite: memory 00:07:26.316 Test: alloc and free memory map ...[2024-11-05 12:21:55.304780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:26.316 passed 00:07:26.316 Test: mem map translation ...[2024-11-05 12:21:55.326318] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:26.316 [2024-11-05 
12:21:55.326340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:26.316 [2024-11-05 12:21:55.326398] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:26.316 [2024-11-05 12:21:55.326410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:26.316 passed 00:07:26.316 Test: mem map registration ...[2024-11-05 12:21:55.369917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:26.316 [2024-11-05 12:21:55.369938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:26.316 passed 00:07:26.316 Test: mem map adjacent registrations ...passed 00:07:26.316 00:07:26.316 Run Summary: Type Total Ran Passed Failed Inactive 00:07:26.316 suites 1 1 n/a 0 0 00:07:26.316 tests 4 4 4 0 0 00:07:26.316 asserts 152 152 152 0 n/a 00:07:26.316 00:07:26.316 Elapsed time = 0.146 seconds 00:07:26.316 00:07:26.316 real 0m0.154s 00:07:26.316 user 0m0.143s 00:07:26.316 sys 0m0.010s 00:07:26.316 12:21:55 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.316 12:21:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:26.316 ************************************ 00:07:26.316 END TEST env_memory 00:07:26.316 ************************************ 00:07:26.316 12:21:55 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:26.316 12:21:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:07:26.316 12:21:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.316 12:21:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:26.316 ************************************ 00:07:26.316 START TEST env_vtophys 00:07:26.316 ************************************ 00:07:26.316 12:21:55 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:26.316 EAL: lib.eal log level changed from notice to debug 00:07:26.316 EAL: Detected lcore 0 as core 0 on socket 0 00:07:26.316 EAL: Detected lcore 1 as core 1 on socket 0 00:07:26.316 EAL: Detected lcore 2 as core 2 on socket 0 00:07:26.316 EAL: Detected lcore 3 as core 3 on socket 0 00:07:26.316 EAL: Detected lcore 4 as core 4 on socket 0 00:07:26.316 EAL: Detected lcore 5 as core 5 on socket 0 00:07:26.316 EAL: Detected lcore 6 as core 8 on socket 0 00:07:26.316 EAL: Detected lcore 7 as core 9 on socket 0 00:07:26.316 EAL: Detected lcore 8 as core 10 on socket 0 00:07:26.316 EAL: Detected lcore 9 as core 11 on socket 0 00:07:26.316 EAL: Detected lcore 10 as core 12 on socket 0 00:07:26.317 EAL: Detected lcore 11 as core 13 on socket 0 00:07:26.317 EAL: Detected lcore 12 as core 0 on socket 1 00:07:26.317 EAL: Detected lcore 13 as core 1 on socket 1 00:07:26.317 EAL: Detected lcore 14 as core 2 on socket 1 00:07:26.317 EAL: Detected lcore 15 as core 3 on socket 1 00:07:26.317 EAL: Detected lcore 16 as core 4 on socket 1 00:07:26.317 EAL: Detected lcore 17 as core 5 on socket 1 00:07:26.317 EAL: Detected lcore 18 as core 8 on socket 1 00:07:26.317 EAL: Detected lcore 19 as core 9 on socket 1 00:07:26.317 EAL: Detected lcore 20 as core 10 on socket 1 00:07:26.317 EAL: Detected lcore 21 as core 11 on socket 1 00:07:26.317 EAL: Detected lcore 22 as core 12 on socket 1 00:07:26.317 EAL: Detected lcore 23 as core 13 on socket 1 00:07:26.317 EAL: Detected lcore 24 as core 0 on socket 0 00:07:26.317 EAL: Detected lcore 25 as core 
1 on socket 0 00:07:26.317 EAL: Detected lcore 26 as core 2 on socket 0 00:07:26.317 EAL: Detected lcore 27 as core 3 on socket 0 00:07:26.317 EAL: Detected lcore 28 as core 4 on socket 0 00:07:26.317 EAL: Detected lcore 29 as core 5 on socket 0 00:07:26.317 EAL: Detected lcore 30 as core 8 on socket 0 00:07:26.317 EAL: Detected lcore 31 as core 9 on socket 0 00:07:26.317 EAL: Detected lcore 32 as core 10 on socket 0 00:07:26.317 EAL: Detected lcore 33 as core 11 on socket 0 00:07:26.317 EAL: Detected lcore 34 as core 12 on socket 0 00:07:26.317 EAL: Detected lcore 35 as core 13 on socket 0 00:07:26.317 EAL: Detected lcore 36 as core 0 on socket 1 00:07:26.317 EAL: Detected lcore 37 as core 1 on socket 1 00:07:26.317 EAL: Detected lcore 38 as core 2 on socket 1 00:07:26.317 EAL: Detected lcore 39 as core 3 on socket 1 00:07:26.317 EAL: Detected lcore 40 as core 4 on socket 1 00:07:26.317 EAL: Detected lcore 41 as core 5 on socket 1 00:07:26.317 EAL: Detected lcore 42 as core 8 on socket 1 00:07:26.317 EAL: Detected lcore 43 as core 9 on socket 1 00:07:26.317 EAL: Detected lcore 44 as core 10 on socket 1 00:07:26.317 EAL: Detected lcore 45 as core 11 on socket 1 00:07:26.317 EAL: Detected lcore 46 as core 12 on socket 1 00:07:26.317 EAL: Detected lcore 47 as core 13 on socket 1 00:07:26.317 EAL: Maximum logical cores by configuration: 128 00:07:26.317 EAL: Detected CPU lcores: 48 00:07:26.317 EAL: Detected NUMA nodes: 2 00:07:26.317 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:26.317 EAL: Detected shared linkage of DPDK 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:07:26.317 EAL: Registered [vdev] bus. 
00:07:26.317 EAL: bus.vdev log level changed from disabled to notice 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:07:26.317 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:07:26.317 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:07:26.317 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:07:26.317 EAL: No shared files mode enabled, IPC will be disabled 00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: Bus pci wants IOVA as 'DC' 00:07:26.317 EAL: Bus vdev wants IOVA as 'DC' 00:07:26.317 EAL: Buses did not request a specific IOVA mode. 00:07:26.317 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:26.317 EAL: Selected IOVA mode 'VA' 00:07:26.317 EAL: Probing VFIO support... 00:07:26.317 EAL: IOMMU type 1 (Type 1) is supported 00:07:26.317 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:26.317 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:26.317 EAL: VFIO support initialized 00:07:26.317 EAL: Ask a virtual area of 0x2e000 bytes 00:07:26.317 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:26.317 EAL: Setting up physically contiguous memory... 
00:07:26.317 EAL: Setting maximum number of open files to 524288 00:07:26.317 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:26.317 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:26.317 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:26.317 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:26.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:26.317 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:26.317 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:26.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:26.317 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:07:26.317 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:26.317 EAL: Hugepages will be freed exactly as allocated. 
00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: TSC frequency is ~2700000 KHz 00:07:26.317 EAL: Main lcore 0 is ready (tid=7fd59c5e5a00;cpuset=[0]) 00:07:26.317 EAL: Trying to obtain current memory policy. 00:07:26.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.317 EAL: Restoring previous memory policy: 0 00:07:26.317 EAL: request: mp_malloc_sync 00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: Heap on socket 0 was expanded by 2MB 00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:26.317 EAL: Mem event callback 'spdk:(nil)' registered 00:07:26.317 00:07:26.317 00:07:26.317 CUnit - A unit testing framework for C - Version 2.1-3 00:07:26.317 http://cunit.sourceforge.net/ 00:07:26.317 00:07:26.317 00:07:26.317 Suite: components_suite 00:07:26.317 Test: vtophys_malloc_test ...passed 00:07:26.317 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:26.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.317 EAL: Restoring previous memory policy: 4 00:07:26.317 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.317 EAL: request: mp_malloc_sync 00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: Heap on socket 0 was expanded by 4MB 00:07:26.317 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.317 EAL: request: mp_malloc_sync 00:07:26.317 EAL: No shared files mode enabled, IPC is disabled 00:07:26.317 EAL: Heap on socket 0 was shrunk by 4MB 00:07:26.317 EAL: Trying to obtain current memory policy. 
00:07:26.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.575 EAL: Restoring previous memory policy: 4 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was expanded by 6MB 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was shrunk by 6MB 00:07:26.575 EAL: Trying to obtain current memory policy. 00:07:26.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.575 EAL: Restoring previous memory policy: 4 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was expanded by 10MB 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was shrunk by 10MB 00:07:26.575 EAL: Trying to obtain current memory policy. 00:07:26.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.575 EAL: Restoring previous memory policy: 4 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was expanded by 18MB 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was shrunk by 18MB 00:07:26.575 EAL: Trying to obtain current memory policy. 
00:07:26.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.575 EAL: Restoring previous memory policy: 4 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was expanded by 34MB 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was shrunk by 34MB 00:07:26.575 EAL: Trying to obtain current memory policy. 00:07:26.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.575 EAL: Restoring previous memory policy: 4 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was expanded by 66MB 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was shrunk by 66MB 00:07:26.575 EAL: Trying to obtain current memory policy. 00:07:26.575 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.575 EAL: Restoring previous memory policy: 4 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.575 EAL: request: mp_malloc_sync 00:07:26.575 EAL: No shared files mode enabled, IPC is disabled 00:07:26.575 EAL: Heap on socket 0 was expanded by 130MB 00:07:26.575 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.576 EAL: request: mp_malloc_sync 00:07:26.576 EAL: No shared files mode enabled, IPC is disabled 00:07:26.576 EAL: Heap on socket 0 was shrunk by 130MB 00:07:26.576 EAL: Trying to obtain current memory policy. 
00:07:26.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.576 EAL: Restoring previous memory policy: 4 00:07:26.576 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.576 EAL: request: mp_malloc_sync 00:07:26.576 EAL: No shared files mode enabled, IPC is disabled 00:07:26.576 EAL: Heap on socket 0 was expanded by 258MB 00:07:26.833 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.833 EAL: request: mp_malloc_sync 00:07:26.833 EAL: No shared files mode enabled, IPC is disabled 00:07:26.833 EAL: Heap on socket 0 was shrunk by 258MB 00:07:26.833 EAL: Trying to obtain current memory policy. 00:07:26.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:26.833 EAL: Restoring previous memory policy: 4 00:07:26.833 EAL: Calling mem event callback 'spdk:(nil)' 00:07:26.833 EAL: request: mp_malloc_sync 00:07:26.833 EAL: No shared files mode enabled, IPC is disabled 00:07:26.833 EAL: Heap on socket 0 was expanded by 514MB 00:07:27.090 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.090 EAL: request: mp_malloc_sync 00:07:27.090 EAL: No shared files mode enabled, IPC is disabled 00:07:27.090 EAL: Heap on socket 0 was shrunk by 514MB 00:07:27.090 EAL: Trying to obtain current memory policy. 
00:07:27.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.348 EAL: Restoring previous memory policy: 4 00:07:27.348 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.348 EAL: request: mp_malloc_sync 00:07:27.348 EAL: No shared files mode enabled, IPC is disabled 00:07:27.348 EAL: Heap on socket 0 was expanded by 1026MB 00:07:27.606 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.864 EAL: request: mp_malloc_sync 00:07:27.864 EAL: No shared files mode enabled, IPC is disabled 00:07:27.864 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:27.864 passed 00:07:27.864 00:07:27.864 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.864 suites 1 1 n/a 0 0 00:07:27.864 tests 2 2 2 0 0 00:07:27.864 asserts 497 497 497 0 n/a 00:07:27.864 00:07:27.864 Elapsed time = 1.323 seconds 00:07:27.864 EAL: Calling mem event callback 'spdk:(nil)' 00:07:27.864 EAL: request: mp_malloc_sync 00:07:27.864 EAL: No shared files mode enabled, IPC is disabled 00:07:27.864 EAL: Heap on socket 0 was shrunk by 2MB 00:07:27.864 EAL: No shared files mode enabled, IPC is disabled 00:07:27.864 EAL: No shared files mode enabled, IPC is disabled 00:07:27.864 EAL: No shared files mode enabled, IPC is disabled 00:07:27.864 00:07:27.864 real 0m1.440s 00:07:27.864 user 0m0.849s 00:07:27.864 sys 0m0.560s 00:07:27.864 12:21:56 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.864 12:21:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:27.864 ************************************ 00:07:27.864 END TEST env_vtophys 00:07:27.864 ************************************ 00:07:27.864 12:21:56 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:27.864 12:21:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:27.864 12:21:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:27.864 12:21:56 env -- common/autotest_common.sh@10 -- # set +x 00:07:27.864 
************************************ 00:07:27.864 START TEST env_pci 00:07:27.864 ************************************ 00:07:27.864 12:21:56 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:27.864 00:07:27.864 00:07:27.864 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.864 http://cunit.sourceforge.net/ 00:07:27.864 00:07:27.864 00:07:27.864 Suite: pci 00:07:27.864 Test: pci_hook ...[2024-11-05 12:21:56.975726] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 512837 has claimed it 00:07:27.864 EAL: Cannot find device (10000:00:01.0) 00:07:27.864 EAL: Failed to attach device on primary process 00:07:27.864 passed 00:07:27.864 00:07:27.864 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.864 suites 1 1 n/a 0 0 00:07:27.864 tests 1 1 1 0 0 00:07:27.864 asserts 25 25 25 0 n/a 00:07:27.864 00:07:27.864 Elapsed time = 0.021 seconds 00:07:27.864 00:07:27.864 real 0m0.035s 00:07:27.864 user 0m0.010s 00:07:27.864 sys 0m0.025s 00:07:27.864 12:21:56 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:27.864 12:21:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:27.864 ************************************ 00:07:27.864 END TEST env_pci 00:07:27.864 ************************************ 00:07:27.864 12:21:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:27.864 12:21:57 env -- env/env.sh@15 -- # uname 00:07:27.864 12:21:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:27.864 12:21:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:27.864 12:21:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:27.864 12:21:57 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:27.864 12:21:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:27.864 12:21:57 env -- common/autotest_common.sh@10 -- # set +x 00:07:27.864 ************************************ 00:07:27.864 START TEST env_dpdk_post_init 00:07:27.864 ************************************ 00:07:27.864 12:21:57 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:27.864 EAL: Detected CPU lcores: 48 00:07:27.864 EAL: Detected NUMA nodes: 2 00:07:27.864 EAL: Detected shared linkage of DPDK 00:07:27.864 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:27.864 EAL: Selected IOVA mode 'VA' 00:07:27.864 EAL: VFIO support initialized 00:07:27.864 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:28.123 EAL: Using IOMMU type 1 (Type 1) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:07:28.123 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:07:28.123 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:07:29.059 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:07:32.337 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:07:32.337 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:07:32.337 Starting DPDK initialization... 00:07:32.337 Starting SPDK post initialization... 00:07:32.337 SPDK NVMe probe 00:07:32.337 Attaching to 0000:88:00.0 00:07:32.337 Attached to 0000:88:00.0 00:07:32.337 Cleaning up... 00:07:32.337 00:07:32.337 real 0m4.421s 00:07:32.337 user 0m3.295s 00:07:32.337 sys 0m0.183s 00:07:32.337 12:22:01 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.337 12:22:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:32.337 ************************************ 00:07:32.337 END TEST env_dpdk_post_init 00:07:32.337 ************************************ 00:07:32.337 12:22:01 env -- env/env.sh@26 -- # uname 00:07:32.337 12:22:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:32.337 12:22:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:32.337 12:22:01 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.337 12:22:01 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.337 12:22:01 env -- common/autotest_common.sh@10 -- # set +x 00:07:32.337 ************************************ 00:07:32.337 START TEST env_mem_callbacks 00:07:32.337 ************************************ 00:07:32.337 12:22:01 
env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:32.337 EAL: Detected CPU lcores: 48 00:07:32.337 EAL: Detected NUMA nodes: 2 00:07:32.337 EAL: Detected shared linkage of DPDK 00:07:32.337 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:32.337 EAL: Selected IOVA mode 'VA' 00:07:32.337 EAL: VFIO support initialized 00:07:32.337 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:32.337 00:07:32.337 00:07:32.337 CUnit - A unit testing framework for C - Version 2.1-3 00:07:32.337 http://cunit.sourceforge.net/ 00:07:32.337 00:07:32.337 00:07:32.337 Suite: memory 00:07:32.337 Test: test ... 00:07:32.337 register 0x200000200000 2097152 00:07:32.337 malloc 3145728 00:07:32.337 register 0x200000400000 4194304 00:07:32.337 buf 0x200000500000 len 3145728 PASSED 00:07:32.337 malloc 64 00:07:32.337 buf 0x2000004fff40 len 64 PASSED 00:07:32.337 malloc 4194304 00:07:32.337 register 0x200000800000 6291456 00:07:32.337 buf 0x200000a00000 len 4194304 PASSED 00:07:32.337 free 0x200000500000 3145728 00:07:32.337 free 0x2000004fff40 64 00:07:32.337 unregister 0x200000400000 4194304 PASSED 00:07:32.337 free 0x200000a00000 4194304 00:07:32.337 unregister 0x200000800000 6291456 PASSED 00:07:32.337 malloc 8388608 00:07:32.337 register 0x200000400000 10485760 00:07:32.337 buf 0x200000600000 len 8388608 PASSED 00:07:32.337 free 0x200000600000 8388608 00:07:32.337 unregister 0x200000400000 10485760 PASSED 00:07:32.337 passed 00:07:32.337 00:07:32.337 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.337 suites 1 1 n/a 0 0 00:07:32.337 tests 1 1 1 0 0 00:07:32.337 asserts 15 15 15 0 n/a 00:07:32.337 00:07:32.337 Elapsed time = 0.005 seconds 00:07:32.337 00:07:32.337 real 0m0.049s 00:07:32.337 user 0m0.013s 00:07:32.337 sys 0m0.036s 00:07:32.337 12:22:01 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.337 12:22:01 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:32.337 ************************************ 00:07:32.337 END TEST env_mem_callbacks 00:07:32.337 ************************************ 00:07:32.594 00:07:32.594 real 0m6.488s 00:07:32.594 user 0m4.514s 00:07:32.594 sys 0m1.018s 00:07:32.594 12:22:01 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.594 12:22:01 env -- common/autotest_common.sh@10 -- # set +x 00:07:32.594 ************************************ 00:07:32.595 END TEST env 00:07:32.595 ************************************ 00:07:32.595 12:22:01 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:32.595 12:22:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.595 12:22:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.595 12:22:01 -- common/autotest_common.sh@10 -- # set +x 00:07:32.595 ************************************ 00:07:32.595 START TEST rpc 00:07:32.595 ************************************ 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:32.595 * Looking for test storage... 
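Each `run_test` invocation traced above prints a `START TEST` banner, executes the test command, and prints an `END TEST` banner. The real helper lives in `common/autotest_common.sh` and also manages xtrace state; this is a simplified stand-in sketch (banner width and the `demo_test` name are illustrative assumptions):

```shell
# Hedged sketch of a run_test-style wrapper like the one traced above:
# banner, run the command, banner, propagate the exit code.
run_test() {
    local name="$1"; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```

Calling `run_test demo_test echo hello` would print the banners around the command's own output, mirroring the `START TEST` / `END TEST` markers seen throughout this log.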
00:07:32.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.595 12:22:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.595 12:22:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.595 12:22:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.595 12:22:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.595 12:22:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.595 12:22:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.595 12:22:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.595 12:22:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:32.595 12:22:01 rpc -- scripts/common.sh@345 -- # : 1 00:07:32.595 12:22:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.595 12:22:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.595 12:22:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:32.595 12:22:01 rpc -- scripts/common.sh@353 -- # local d=1 00:07:32.595 12:22:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.595 12:22:01 rpc -- scripts/common.sh@355 -- # echo 1 00:07:32.595 12:22:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.595 12:22:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@353 -- # local d=2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.595 12:22:01 rpc -- scripts/common.sh@355 -- # echo 2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.595 12:22:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.595 12:22:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.595 12:22:01 rpc -- scripts/common.sh@368 -- # return 0 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.595 --rc genhtml_branch_coverage=1 00:07:32.595 --rc genhtml_function_coverage=1 00:07:32.595 --rc genhtml_legend=1 00:07:32.595 --rc geninfo_all_blocks=1 00:07:32.595 --rc geninfo_unexecuted_blocks=1 00:07:32.595 00:07:32.595 ' 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.595 --rc genhtml_branch_coverage=1 00:07:32.595 --rc genhtml_function_coverage=1 00:07:32.595 --rc genhtml_legend=1 00:07:32.595 --rc geninfo_all_blocks=1 00:07:32.595 --rc geninfo_unexecuted_blocks=1 00:07:32.595 00:07:32.595 ' 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:32.595 --rc genhtml_branch_coverage=1 00:07:32.595 --rc genhtml_function_coverage=1 00:07:32.595 --rc genhtml_legend=1 00:07:32.595 --rc geninfo_all_blocks=1 00:07:32.595 --rc geninfo_unexecuted_blocks=1 00:07:32.595 00:07:32.595 ' 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.595 --rc genhtml_branch_coverage=1 00:07:32.595 --rc genhtml_function_coverage=1 00:07:32.595 --rc genhtml_legend=1 00:07:32.595 --rc geninfo_all_blocks=1 00:07:32.595 --rc geninfo_unexecuted_blocks=1 00:07:32.595 00:07:32.595 ' 00:07:32.595 12:22:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=513571 00:07:32.595 12:22:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:32.595 12:22:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:32.595 12:22:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 513571 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@833 -- # '[' -z 513571 ']' 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.595 12:22:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.854 [2024-11-05 12:22:01.853400] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
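The xtrace above shows `scripts/common.sh` comparing the installed `lcov` version against a threshold: `lt 1.15 2` splits each version string on `.`, `-` and `:` (`IFS=.-:`) and compares numerically field by field. A hedged reimplementation sketch of that comparison — treating missing trailing fields as 0 is an assumption, and `version_lt` is an illustrative name, not the script's:

```shell
# Hedged sketch of the cmp_versions logic traced above: split each version
# on '.', '-' or ':' and compare the fields numerically, left to right.
version_lt() {
    awk -v v1="$1" -v v2="$2" 'BEGIN {
        n1 = split(v1, a, /[.:-]/); n2 = split(v2, b, /[.:-]/)
        n = (n1 > n2) ? n1 : n2
        for (i = 1; i <= n; i++) {
            x = (i <= n1) ? a[i] + 0 : 0   # missing fields count as 0
            y = (i <= n2) ? b[i] + 0 : 0
            if (x < y) exit 0              # strictly less: success
            if (x > y) exit 1
        }
        exit 1                             # equal: not strictly less
    }'
}
```

So `version_lt 1.15 2` succeeds (1 < 2 in the first field), which is why the trace above takes the `lcov_branch_coverage` branch for old lcov releases.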
00:07:32.854 [2024-11-05 12:22:01.853478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513571 ] 00:07:32.854 [2024-11-05 12:22:01.923784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.854 [2024-11-05 12:22:01.968437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:32.854 [2024-11-05 12:22:01.968498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 513571' to capture a snapshot of events at runtime. 00:07:32.854 [2024-11-05 12:22:01.968526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.854 [2024-11-05 12:22:01.968537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.854 [2024-11-05 12:22:01.968546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid513571 for offline analysis/debug. 
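The `waitforlisten 513571` step traced above polls until `spdk_tgt` is up and listening on `/var/tmp/spdk.sock`. A hedged sketch of that wait loop — the real helper in `common/autotest_common.sh` also issues an RPC over the socket; checking that the process is alive and the socket path exists is a simplification, and the retry count mirrors the `max_retries=100` seen in the trace:

```shell
# Hedged sketch of the waitforlisten step traced above: poll until the
# target pid is alive AND its RPC socket path appears (the real helper
# additionally probes the socket with an RPC; this only checks existence).
waitforlisten() {
    local pid="$1" sock="${2:-/var/tmp/spdk.sock}" i=0
    while [ "$i" -lt 100 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -e "$sock" ] && return 0               # socket path is up
        sleep 0.1
        i=$((i + 1))
    done
    return 1                                     # timed out
}
```

On success the caller proceeds to send RPCs (here, `bdev_get_bdevs` and friends); on failure the trap in `rpc.sh@66` kills the target and aborts.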
00:07:32.854 [2024-11-05 12:22:01.969082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.111 12:22:02 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.111 12:22:02 rpc -- common/autotest_common.sh@866 -- # return 0 00:07:33.111 12:22:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:33.111 12:22:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:33.111 12:22:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:33.111 12:22:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:33.111 12:22:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.111 12:22:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.111 12:22:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 ************************************ 00:07:33.111 START TEST rpc_integrity 00:07:33.111 ************************************ 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.111 12:22:02 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.111 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:33.111 { 00:07:33.111 "name": "Malloc0", 00:07:33.111 "aliases": [ 00:07:33.111 "f1c10b7b-079a-4601-819e-058fa39350c0" 00:07:33.111 ], 00:07:33.111 "product_name": "Malloc disk", 00:07:33.111 "block_size": 512, 00:07:33.111 "num_blocks": 16384, 00:07:33.111 "uuid": "f1c10b7b-079a-4601-819e-058fa39350c0", 00:07:33.111 "assigned_rate_limits": { 00:07:33.111 "rw_ios_per_sec": 0, 00:07:33.111 "rw_mbytes_per_sec": 0, 00:07:33.111 "r_mbytes_per_sec": 0, 00:07:33.111 "w_mbytes_per_sec": 0 00:07:33.111 }, 00:07:33.111 "claimed": false, 00:07:33.111 "zoned": false, 00:07:33.111 "supported_io_types": { 00:07:33.111 "read": true, 00:07:33.111 "write": true, 00:07:33.111 "unmap": true, 00:07:33.111 "flush": true, 00:07:33.111 "reset": true, 00:07:33.111 "nvme_admin": false, 00:07:33.111 "nvme_io": false, 00:07:33.111 "nvme_io_md": false, 00:07:33.111 "write_zeroes": true, 00:07:33.111 "zcopy": true, 00:07:33.111 "get_zone_info": false, 00:07:33.111 
"zone_management": false, 00:07:33.111 "zone_append": false, 00:07:33.111 "compare": false, 00:07:33.111 "compare_and_write": false, 00:07:33.111 "abort": true, 00:07:33.111 "seek_hole": false, 00:07:33.111 "seek_data": false, 00:07:33.111 "copy": true, 00:07:33.111 "nvme_iov_md": false 00:07:33.111 }, 00:07:33.111 "memory_domains": [ 00:07:33.111 { 00:07:33.111 "dma_device_id": "system", 00:07:33.111 "dma_device_type": 1 00:07:33.111 }, 00:07:33.111 { 00:07:33.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.111 "dma_device_type": 2 00:07:33.111 } 00:07:33.111 ], 00:07:33.111 "driver_specific": {} 00:07:33.111 } 00:07:33.111 ]' 00:07:33.111 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 [2024-11-05 12:22:02.355437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:33.368 [2024-11-05 12:22:02.355493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.368 [2024-11-05 12:22:02.355517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x658b80 00:07:33.368 [2024-11-05 12:22:02.355531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.368 [2024-11-05 12:22:02.356876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.368 [2024-11-05 12:22:02.356902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:33.368 Passthru0 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:33.368 { 00:07:33.368 "name": "Malloc0", 00:07:33.368 "aliases": [ 00:07:33.368 "f1c10b7b-079a-4601-819e-058fa39350c0" 00:07:33.368 ], 00:07:33.368 "product_name": "Malloc disk", 00:07:33.368 "block_size": 512, 00:07:33.368 "num_blocks": 16384, 00:07:33.368 "uuid": "f1c10b7b-079a-4601-819e-058fa39350c0", 00:07:33.368 "assigned_rate_limits": { 00:07:33.368 "rw_ios_per_sec": 0, 00:07:33.368 "rw_mbytes_per_sec": 0, 00:07:33.368 "r_mbytes_per_sec": 0, 00:07:33.368 "w_mbytes_per_sec": 0 00:07:33.368 }, 00:07:33.368 "claimed": true, 00:07:33.368 "claim_type": "exclusive_write", 00:07:33.368 "zoned": false, 00:07:33.368 "supported_io_types": { 00:07:33.368 "read": true, 00:07:33.368 "write": true, 00:07:33.368 "unmap": true, 00:07:33.368 "flush": true, 00:07:33.368 "reset": true, 00:07:33.368 "nvme_admin": false, 00:07:33.368 "nvme_io": false, 00:07:33.368 "nvme_io_md": false, 00:07:33.368 "write_zeroes": true, 00:07:33.368 "zcopy": true, 00:07:33.368 "get_zone_info": false, 00:07:33.368 "zone_management": false, 00:07:33.368 "zone_append": false, 00:07:33.368 "compare": false, 00:07:33.368 "compare_and_write": false, 00:07:33.368 "abort": true, 00:07:33.368 "seek_hole": false, 00:07:33.368 "seek_data": false, 00:07:33.368 "copy": true, 00:07:33.368 "nvme_iov_md": false 00:07:33.368 }, 00:07:33.368 "memory_domains": [ 00:07:33.368 { 00:07:33.368 "dma_device_id": "system", 00:07:33.368 "dma_device_type": 1 00:07:33.368 }, 00:07:33.368 { 00:07:33.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.368 "dma_device_type": 2 00:07:33.368 } 00:07:33.368 ], 00:07:33.368 "driver_specific": {} 00:07:33.368 }, 00:07:33.368 { 
00:07:33.368 "name": "Passthru0", 00:07:33.368 "aliases": [ 00:07:33.368 "25eeac1c-8b46-5cad-b783-2f5b05187377" 00:07:33.368 ], 00:07:33.368 "product_name": "passthru", 00:07:33.368 "block_size": 512, 00:07:33.368 "num_blocks": 16384, 00:07:33.368 "uuid": "25eeac1c-8b46-5cad-b783-2f5b05187377", 00:07:33.368 "assigned_rate_limits": { 00:07:33.368 "rw_ios_per_sec": 0, 00:07:33.368 "rw_mbytes_per_sec": 0, 00:07:33.368 "r_mbytes_per_sec": 0, 00:07:33.368 "w_mbytes_per_sec": 0 00:07:33.368 }, 00:07:33.368 "claimed": false, 00:07:33.368 "zoned": false, 00:07:33.368 "supported_io_types": { 00:07:33.368 "read": true, 00:07:33.368 "write": true, 00:07:33.368 "unmap": true, 00:07:33.368 "flush": true, 00:07:33.368 "reset": true, 00:07:33.368 "nvme_admin": false, 00:07:33.368 "nvme_io": false, 00:07:33.368 "nvme_io_md": false, 00:07:33.368 "write_zeroes": true, 00:07:33.368 "zcopy": true, 00:07:33.368 "get_zone_info": false, 00:07:33.368 "zone_management": false, 00:07:33.368 "zone_append": false, 00:07:33.368 "compare": false, 00:07:33.368 "compare_and_write": false, 00:07:33.368 "abort": true, 00:07:33.368 "seek_hole": false, 00:07:33.368 "seek_data": false, 00:07:33.368 "copy": true, 00:07:33.368 "nvme_iov_md": false 00:07:33.368 }, 00:07:33.368 "memory_domains": [ 00:07:33.368 { 00:07:33.368 "dma_device_id": "system", 00:07:33.368 "dma_device_type": 1 00:07:33.368 }, 00:07:33.368 { 00:07:33.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.368 "dma_device_type": 2 00:07:33.368 } 00:07:33.368 ], 00:07:33.368 "driver_specific": { 00:07:33.368 "passthru": { 00:07:33.368 "name": "Passthru0", 00:07:33.368 "base_bdev_name": "Malloc0" 00:07:33.368 } 00:07:33.368 } 00:07:33.368 } 00:07:33.368 ]' 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:33.368 12:22:02 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:33.368 12:22:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:33.368 00:07:33.368 real 0m0.216s 00:07:33.368 user 0m0.140s 00:07:33.368 sys 0m0.023s 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 ************************************ 00:07:33.368 END TEST rpc_integrity 00:07:33.368 ************************************ 00:07:33.368 12:22:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:33.368 12:22:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.368 12:22:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.368 12:22:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 ************************************ 00:07:33.368 START TEST rpc_plugins 
00:07:33.368 ************************************ 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:33.368 { 00:07:33.368 "name": "Malloc1", 00:07:33.368 "aliases": [ 00:07:33.368 "00f3770a-c626-497f-b984-58ec440a5f5b" 00:07:33.368 ], 00:07:33.368 "product_name": "Malloc disk", 00:07:33.368 "block_size": 4096, 00:07:33.368 "num_blocks": 256, 00:07:33.368 "uuid": "00f3770a-c626-497f-b984-58ec440a5f5b", 00:07:33.368 "assigned_rate_limits": { 00:07:33.368 "rw_ios_per_sec": 0, 00:07:33.368 "rw_mbytes_per_sec": 0, 00:07:33.368 "r_mbytes_per_sec": 0, 00:07:33.368 "w_mbytes_per_sec": 0 00:07:33.368 }, 00:07:33.368 "claimed": false, 00:07:33.368 "zoned": false, 00:07:33.368 "supported_io_types": { 00:07:33.368 "read": true, 00:07:33.368 "write": true, 00:07:33.368 "unmap": true, 00:07:33.368 "flush": true, 00:07:33.368 "reset": true, 00:07:33.368 "nvme_admin": false, 00:07:33.368 "nvme_io": false, 00:07:33.368 "nvme_io_md": false, 00:07:33.368 "write_zeroes": true, 00:07:33.368 "zcopy": true, 00:07:33.368 "get_zone_info": false, 00:07:33.368 "zone_management": false, 00:07:33.368 
"zone_append": false, 00:07:33.368 "compare": false, 00:07:33.368 "compare_and_write": false, 00:07:33.368 "abort": true, 00:07:33.368 "seek_hole": false, 00:07:33.368 "seek_data": false, 00:07:33.368 "copy": true, 00:07:33.368 "nvme_iov_md": false 00:07:33.368 }, 00:07:33.368 "memory_domains": [ 00:07:33.368 { 00:07:33.368 "dma_device_id": "system", 00:07:33.368 "dma_device_type": 1 00:07:33.368 }, 00:07:33.368 { 00:07:33.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.368 "dma_device_type": 2 00:07:33.368 } 00:07:33.368 ], 00:07:33.368 "driver_specific": {} 00:07:33.368 } 00:07:33.368 ]' 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.368 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:33.368 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:33.626 12:22:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:33.626 00:07:33.626 real 0m0.107s 00:07:33.626 user 0m0.070s 00:07:33.626 sys 0m0.009s 00:07:33.626 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.626 12:22:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:33.626 ************************************ 
00:07:33.626 END TEST rpc_plugins 00:07:33.626 ************************************ 00:07:33.626 12:22:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:33.626 12:22:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.626 12:22:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.626 12:22:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.626 ************************************ 00:07:33.626 START TEST rpc_trace_cmd_test 00:07:33.626 ************************************ 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:33.626 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid513571", 00:07:33.626 "tpoint_group_mask": "0x8", 00:07:33.626 "iscsi_conn": { 00:07:33.626 "mask": "0x2", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "scsi": { 00:07:33.626 "mask": "0x4", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "bdev": { 00:07:33.626 "mask": "0x8", 00:07:33.626 "tpoint_mask": "0xffffffffffffffff" 00:07:33.626 }, 00:07:33.626 "nvmf_rdma": { 00:07:33.626 "mask": "0x10", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "nvmf_tcp": { 00:07:33.626 "mask": "0x20", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "ftl": { 00:07:33.626 "mask": "0x40", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "blobfs": { 00:07:33.626 "mask": "0x80", 00:07:33.626 
"tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "dsa": { 00:07:33.626 "mask": "0x200", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "thread": { 00:07:33.626 "mask": "0x400", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "nvme_pcie": { 00:07:33.626 "mask": "0x800", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "iaa": { 00:07:33.626 "mask": "0x1000", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "nvme_tcp": { 00:07:33.626 "mask": "0x2000", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "bdev_nvme": { 00:07:33.626 "mask": "0x4000", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "sock": { 00:07:33.626 "mask": "0x8000", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "blob": { 00:07:33.626 "mask": "0x10000", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "bdev_raid": { 00:07:33.626 "mask": "0x20000", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 }, 00:07:33.626 "scheduler": { 00:07:33.626 "mask": "0x40000", 00:07:33.626 "tpoint_mask": "0x0" 00:07:33.626 } 00:07:33.626 }' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:33.626 00:07:33.626 real 0m0.180s 00:07:33.626 user 0m0.158s 00:07:33.626 sys 0m0.015s 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.626 12:22:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.626 ************************************ 00:07:33.626 END TEST rpc_trace_cmd_test 00:07:33.626 ************************************ 00:07:33.883 12:22:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:33.883 12:22:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:33.883 12:22:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:33.883 12:22:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.883 12:22:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.883 12:22:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.883 ************************************ 00:07:33.883 START TEST rpc_daemon_integrity 00:07:33.883 ************************************ 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.883 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:33.883 { 00:07:33.883 "name": "Malloc2", 00:07:33.883 "aliases": [ 00:07:33.883 "7951ceec-5651-4ca4-ae5f-06584d4094a5" 00:07:33.883 ], 00:07:33.883 "product_name": "Malloc disk", 00:07:33.883 "block_size": 512, 00:07:33.883 "num_blocks": 16384, 00:07:33.883 "uuid": "7951ceec-5651-4ca4-ae5f-06584d4094a5", 00:07:33.883 "assigned_rate_limits": { 00:07:33.883 "rw_ios_per_sec": 0, 00:07:33.883 "rw_mbytes_per_sec": 0, 00:07:33.883 "r_mbytes_per_sec": 0, 00:07:33.884 "w_mbytes_per_sec": 0 00:07:33.884 }, 00:07:33.884 "claimed": false, 00:07:33.884 "zoned": false, 00:07:33.884 "supported_io_types": { 00:07:33.884 "read": true, 00:07:33.884 "write": true, 00:07:33.884 "unmap": true, 00:07:33.884 "flush": true, 00:07:33.884 "reset": true, 00:07:33.884 "nvme_admin": false, 00:07:33.884 "nvme_io": false, 00:07:33.884 "nvme_io_md": false, 00:07:33.884 "write_zeroes": true, 00:07:33.884 "zcopy": true, 00:07:33.884 "get_zone_info": false, 00:07:33.884 "zone_management": false, 00:07:33.884 "zone_append": false, 00:07:33.884 "compare": false, 00:07:33.884 "compare_and_write": false, 00:07:33.884 "abort": true, 00:07:33.884 "seek_hole": false, 00:07:33.884 "seek_data": false, 00:07:33.884 "copy": true, 00:07:33.884 "nvme_iov_md": false 00:07:33.884 }, 00:07:33.884 "memory_domains": [ 00:07:33.884 { 
00:07:33.884 "dma_device_id": "system", 00:07:33.884 "dma_device_type": 1 00:07:33.884 }, 00:07:33.884 { 00:07:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.884 "dma_device_type": 2 00:07:33.884 } 00:07:33.884 ], 00:07:33.884 "driver_specific": {} 00:07:33.884 } 00:07:33.884 ]' 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.884 [2024-11-05 12:22:02.993593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:33.884 [2024-11-05 12:22:02.993648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.884 [2024-11-05 12:22:02.993677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6e9790 00:07:33.884 [2024-11-05 12:22:02.993691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.884 [2024-11-05 12:22:02.994886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.884 [2024-11-05 12:22:02.994918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:33.884 Passthru0 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.884 12:22:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:33.884 { 00:07:33.884 "name": "Malloc2", 00:07:33.884 "aliases": [ 00:07:33.884 "7951ceec-5651-4ca4-ae5f-06584d4094a5" 00:07:33.884 ], 00:07:33.884 "product_name": "Malloc disk", 00:07:33.884 "block_size": 512, 00:07:33.884 "num_blocks": 16384, 00:07:33.884 "uuid": "7951ceec-5651-4ca4-ae5f-06584d4094a5", 00:07:33.884 "assigned_rate_limits": { 00:07:33.884 "rw_ios_per_sec": 0, 00:07:33.884 "rw_mbytes_per_sec": 0, 00:07:33.884 "r_mbytes_per_sec": 0, 00:07:33.884 "w_mbytes_per_sec": 0 00:07:33.884 }, 00:07:33.884 "claimed": true, 00:07:33.884 "claim_type": "exclusive_write", 00:07:33.884 "zoned": false, 00:07:33.884 "supported_io_types": { 00:07:33.884 "read": true, 00:07:33.884 "write": true, 00:07:33.884 "unmap": true, 00:07:33.884 "flush": true, 00:07:33.884 "reset": true, 00:07:33.884 "nvme_admin": false, 00:07:33.884 "nvme_io": false, 00:07:33.884 "nvme_io_md": false, 00:07:33.884 "write_zeroes": true, 00:07:33.884 "zcopy": true, 00:07:33.884 "get_zone_info": false, 00:07:33.884 "zone_management": false, 00:07:33.884 "zone_append": false, 00:07:33.884 "compare": false, 00:07:33.884 "compare_and_write": false, 00:07:33.884 "abort": true, 00:07:33.884 "seek_hole": false, 00:07:33.884 "seek_data": false, 00:07:33.884 "copy": true, 00:07:33.884 "nvme_iov_md": false 00:07:33.884 }, 00:07:33.884 "memory_domains": [ 00:07:33.884 { 00:07:33.884 "dma_device_id": "system", 00:07:33.884 "dma_device_type": 1 00:07:33.884 }, 00:07:33.884 { 00:07:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.884 "dma_device_type": 2 00:07:33.884 } 00:07:33.884 ], 00:07:33.884 "driver_specific": {} 00:07:33.884 }, 00:07:33.884 { 00:07:33.884 "name": "Passthru0", 00:07:33.884 "aliases": [ 00:07:33.884 "85e4aa53-24de-5c77-a0b5-13f21879de03" 00:07:33.884 ], 00:07:33.884 "product_name": "passthru", 00:07:33.884 "block_size": 512, 00:07:33.884 "num_blocks": 16384, 00:07:33.884 "uuid": 
"85e4aa53-24de-5c77-a0b5-13f21879de03", 00:07:33.884 "assigned_rate_limits": { 00:07:33.884 "rw_ios_per_sec": 0, 00:07:33.884 "rw_mbytes_per_sec": 0, 00:07:33.884 "r_mbytes_per_sec": 0, 00:07:33.884 "w_mbytes_per_sec": 0 00:07:33.884 }, 00:07:33.884 "claimed": false, 00:07:33.884 "zoned": false, 00:07:33.884 "supported_io_types": { 00:07:33.884 "read": true, 00:07:33.884 "write": true, 00:07:33.884 "unmap": true, 00:07:33.884 "flush": true, 00:07:33.884 "reset": true, 00:07:33.884 "nvme_admin": false, 00:07:33.884 "nvme_io": false, 00:07:33.884 "nvme_io_md": false, 00:07:33.884 "write_zeroes": true, 00:07:33.884 "zcopy": true, 00:07:33.884 "get_zone_info": false, 00:07:33.884 "zone_management": false, 00:07:33.884 "zone_append": false, 00:07:33.884 "compare": false, 00:07:33.884 "compare_and_write": false, 00:07:33.884 "abort": true, 00:07:33.884 "seek_hole": false, 00:07:33.884 "seek_data": false, 00:07:33.884 "copy": true, 00:07:33.884 "nvme_iov_md": false 00:07:33.884 }, 00:07:33.884 "memory_domains": [ 00:07:33.884 { 00:07:33.884 "dma_device_id": "system", 00:07:33.884 "dma_device_type": 1 00:07:33.884 }, 00:07:33.884 { 00:07:33.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.884 "dma_device_type": 2 00:07:33.884 } 00:07:33.884 ], 00:07:33.884 "driver_specific": { 00:07:33.884 "passthru": { 00:07:33.884 "name": "Passthru0", 00:07:33.884 "base_bdev_name": "Malloc2" 00:07:33.884 } 00:07:33.884 } 00:07:33.884 } 00:07:33.884 ]' 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:33.884 00:07:33.884 real 0m0.211s 00:07:33.884 user 0m0.140s 00:07:33.884 sys 0m0.018s 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.884 12:22:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:33.884 ************************************ 00:07:33.884 END TEST rpc_daemon_integrity 00:07:33.884 ************************************ 00:07:34.141 12:22:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:34.141 12:22:03 rpc -- rpc/rpc.sh@84 -- # killprocess 513571 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@952 -- # '[' -z 513571 ']' 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@956 -- # kill -0 513571 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@957 -- # uname 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:34.141 12:22:03 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 513571 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 513571' 00:07:34.141 killing process with pid 513571 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@971 -- # kill 513571 00:07:34.141 12:22:03 rpc -- common/autotest_common.sh@976 -- # wait 513571 00:07:34.400 00:07:34.400 real 0m1.897s 00:07:34.400 user 0m2.350s 00:07:34.400 sys 0m0.603s 00:07:34.400 12:22:03 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.400 12:22:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.400 ************************************ 00:07:34.400 END TEST rpc 00:07:34.400 ************************************ 00:07:34.400 12:22:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:34.400 12:22:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.400 12:22:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.400 12:22:03 -- common/autotest_common.sh@10 -- # set +x 00:07:34.400 ************************************ 00:07:34.400 START TEST skip_rpc 00:07:34.400 ************************************ 00:07:34.400 12:22:03 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:34.657 * Looking for test storage... 
00:07:34.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:34.657 12:22:03 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:34.657 12:22:03 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:34.657 12:22:03 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:34.657 12:22:03 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.657 12:22:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.658 --rc genhtml_branch_coverage=1 00:07:34.658 --rc genhtml_function_coverage=1 00:07:34.658 --rc genhtml_legend=1 00:07:34.658 --rc geninfo_all_blocks=1 00:07:34.658 --rc geninfo_unexecuted_blocks=1 00:07:34.658 00:07:34.658 ' 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.658 --rc genhtml_branch_coverage=1 00:07:34.658 --rc genhtml_function_coverage=1 00:07:34.658 --rc genhtml_legend=1 00:07:34.658 --rc geninfo_all_blocks=1 00:07:34.658 --rc geninfo_unexecuted_blocks=1 00:07:34.658 00:07:34.658 ' 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.658 --rc genhtml_branch_coverage=1 00:07:34.658 --rc genhtml_function_coverage=1 00:07:34.658 --rc genhtml_legend=1 00:07:34.658 --rc geninfo_all_blocks=1 00:07:34.658 --rc geninfo_unexecuted_blocks=1 00:07:34.658 00:07:34.658 ' 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.658 --rc genhtml_branch_coverage=1 00:07:34.658 --rc genhtml_function_coverage=1 00:07:34.658 --rc genhtml_legend=1 00:07:34.658 --rc geninfo_all_blocks=1 00:07:34.658 --rc geninfo_unexecuted_blocks=1 00:07:34.658 00:07:34.658 ' 00:07:34.658 12:22:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:34.658 12:22:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:34.658 12:22:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.658 12:22:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.658 ************************************ 00:07:34.658 START TEST skip_rpc 00:07:34.658 ************************************ 00:07:34.658 12:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:07:34.658 12:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=513948 00:07:34.658 12:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:34.658 12:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.658 12:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:07:34.658 [2024-11-05 12:22:03.827202] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:07:34.658 [2024-11-05 12:22:03.827287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513948 ] 00:07:34.658 [2024-11-05 12:22:03.891004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.915 [2024-11-05 12:22:03.936708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.171 12:22:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:40.171 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:40.171 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.172 12:22:08 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 513948 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 513948 ']' 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 513948 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 513948 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 513948' 00:07:40.172 killing process with pid 513948 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 513948 00:07:40.172 12:22:08 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 513948 00:07:40.172 00:07:40.172 real 0m5.413s 00:07:40.172 user 0m5.130s 00:07:40.172 sys 0m0.289s 00:07:40.172 12:22:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.172 12:22:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.172 ************************************ 00:07:40.172 END TEST skip_rpc 00:07:40.172 ************************************ 00:07:40.172 12:22:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:40.172 12:22:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:40.172 12:22:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.172 12:22:09 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:40.172 ************************************ 00:07:40.172 START TEST skip_rpc_with_json 00:07:40.172 ************************************ 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=514636 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 514636 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 514636 ']' 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.172 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:40.172 [2024-11-05 12:22:09.292157] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:07:40.172 [2024-11-05 12:22:09.292272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid514636 ] 00:07:40.172 [2024-11-05 12:22:09.358715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.172 [2024-11-05 12:22:09.407707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.430 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.430 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:07:40.430 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:40.430 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.430 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:40.430 [2024-11-05 12:22:09.667379] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:40.430 request: 00:07:40.430 { 00:07:40.430 "trtype": "tcp", 00:07:40.430 "method": "nvmf_get_transports", 00:07:40.430 "req_id": 1 00:07:40.430 } 00:07:40.430 Got JSON-RPC error response 00:07:40.430 response: 00:07:40.430 { 00:07:40.687 "code": -19, 00:07:40.687 "message": "No such device" 00:07:40.687 } 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:40.687 [2024-11-05 12:22:09.675482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.687 12:22:09 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.687 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:40.687 { 00:07:40.687 "subsystems": [ 00:07:40.687 { 00:07:40.687 "subsystem": "fsdev", 00:07:40.687 "config": [ 00:07:40.687 { 00:07:40.687 "method": "fsdev_set_opts", 00:07:40.687 "params": { 00:07:40.687 "fsdev_io_pool_size": 65535, 00:07:40.687 "fsdev_io_cache_size": 256 00:07:40.687 } 00:07:40.687 } 00:07:40.687 ] 00:07:40.687 }, 00:07:40.687 { 00:07:40.687 "subsystem": "vfio_user_target", 00:07:40.687 "config": null 00:07:40.687 }, 00:07:40.687 { 00:07:40.688 "subsystem": "keyring", 00:07:40.688 "config": [] 00:07:40.688 }, 00:07:40.688 { 00:07:40.688 "subsystem": "iobuf", 00:07:40.688 "config": [ 00:07:40.688 { 00:07:40.688 "method": "iobuf_set_options", 00:07:40.688 "params": { 00:07:40.688 "small_pool_count": 8192, 00:07:40.688 "large_pool_count": 1024, 00:07:40.688 "small_bufsize": 8192, 00:07:40.688 "large_bufsize": 135168, 00:07:40.688 "enable_numa": false 00:07:40.688 } 00:07:40.688 } 00:07:40.688 ] 00:07:40.688 }, 00:07:40.688 { 00:07:40.688 "subsystem": "sock", 00:07:40.688 "config": [ 00:07:40.688 { 00:07:40.688 "method": "sock_set_default_impl", 00:07:40.688 "params": { 00:07:40.688 "impl_name": "posix" 00:07:40.688 } 00:07:40.688 }, 00:07:40.688 { 00:07:40.688 "method": "sock_impl_set_options", 00:07:40.688 "params": { 00:07:40.688 "impl_name": "ssl", 00:07:40.688 "recv_buf_size": 4096, 00:07:40.688 "send_buf_size": 4096, 
00:07:40.688           "enable_recv_pipe": true,
00:07:40.688           "enable_quickack": false,
00:07:40.688           "enable_placement_id": 0,
00:07:40.688           "enable_zerocopy_send_server": true,
00:07:40.688           "enable_zerocopy_send_client": false,
00:07:40.688           "zerocopy_threshold": 0,
00:07:40.688           "tls_version": 0,
00:07:40.688           "enable_ktls": false
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "sock_impl_set_options",
00:07:40.688         "params": {
00:07:40.688           "impl_name": "posix",
00:07:40.688           "recv_buf_size": 2097152,
00:07:40.688           "send_buf_size": 2097152,
00:07:40.688           "enable_recv_pipe": true,
00:07:40.688           "enable_quickack": false,
00:07:40.688           "enable_placement_id": 0,
00:07:40.688           "enable_zerocopy_send_server": true,
00:07:40.688           "enable_zerocopy_send_client": false,
00:07:40.688           "zerocopy_threshold": 0,
00:07:40.688           "tls_version": 0,
00:07:40.688           "enable_ktls": false
00:07:40.688         }
00:07:40.688       }
00:07:40.688     ]
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "vmd",
00:07:40.688     "config": []
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "accel",
00:07:40.688     "config": [
00:07:40.688       {
00:07:40.688         "method": "accel_set_options",
00:07:40.688         "params": {
00:07:40.688           "small_cache_size": 128,
00:07:40.688           "large_cache_size": 16,
00:07:40.688           "task_count": 2048,
00:07:40.688           "sequence_count": 2048,
00:07:40.688           "buf_count": 2048
00:07:40.688         }
00:07:40.688       }
00:07:40.688     ]
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "bdev",
00:07:40.688     "config": [
00:07:40.688       {
00:07:40.688         "method": "bdev_set_options",
00:07:40.688         "params": {
00:07:40.688           "bdev_io_pool_size": 65535,
00:07:40.688           "bdev_io_cache_size": 256,
00:07:40.688           "bdev_auto_examine": true,
00:07:40.688           "iobuf_small_cache_size": 128,
00:07:40.688           "iobuf_large_cache_size": 16
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "bdev_raid_set_options",
00:07:40.688         "params": {
00:07:40.688           "process_window_size_kb": 1024,
00:07:40.688           "process_max_bandwidth_mb_sec": 0
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "bdev_iscsi_set_options",
00:07:40.688         "params": {
00:07:40.688           "timeout_sec": 30
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "bdev_nvme_set_options",
00:07:40.688         "params": {
00:07:40.688           "action_on_timeout": "none",
00:07:40.688           "timeout_us": 0,
00:07:40.688           "timeout_admin_us": 0,
00:07:40.688           "keep_alive_timeout_ms": 10000,
00:07:40.688           "arbitration_burst": 0,
00:07:40.688           "low_priority_weight": 0,
00:07:40.688           "medium_priority_weight": 0,
00:07:40.688           "high_priority_weight": 0,
00:07:40.688           "nvme_adminq_poll_period_us": 10000,
00:07:40.688           "nvme_ioq_poll_period_us": 0,
00:07:40.688           "io_queue_requests": 0,
00:07:40.688           "delay_cmd_submit": true,
00:07:40.688           "transport_retry_count": 4,
00:07:40.688           "bdev_retry_count": 3,
00:07:40.688           "transport_ack_timeout": 0,
00:07:40.688           "ctrlr_loss_timeout_sec": 0,
00:07:40.688           "reconnect_delay_sec": 0,
00:07:40.688           "fast_io_fail_timeout_sec": 0,
00:07:40.688           "disable_auto_failback": false,
00:07:40.688           "generate_uuids": false,
00:07:40.688           "transport_tos": 0,
00:07:40.688           "nvme_error_stat": false,
00:07:40.688           "rdma_srq_size": 0,
00:07:40.688           "io_path_stat": false,
00:07:40.688           "allow_accel_sequence": false,
00:07:40.688           "rdma_max_cq_size": 0,
00:07:40.688           "rdma_cm_event_timeout_ms": 0,
00:07:40.688           "dhchap_digests": [
00:07:40.688             "sha256",
00:07:40.688             "sha384",
00:07:40.688             "sha512"
00:07:40.688           ],
00:07:40.688           "dhchap_dhgroups": [
00:07:40.688             "null",
00:07:40.688             "ffdhe2048",
00:07:40.688             "ffdhe3072",
00:07:40.688             "ffdhe4096",
00:07:40.688             "ffdhe6144",
00:07:40.688             "ffdhe8192"
00:07:40.688           ]
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "bdev_nvme_set_hotplug",
00:07:40.688         "params": {
00:07:40.688           "period_us": 100000,
00:07:40.688           "enable": false
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "bdev_wait_for_examine"
00:07:40.688       }
00:07:40.688     ]
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "scsi",
00:07:40.688     "config": null
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "scheduler",
00:07:40.688     "config": [
00:07:40.688       {
00:07:40.688         "method": "framework_set_scheduler",
00:07:40.688         "params": {
00:07:40.688           "name": "static"
00:07:40.688         }
00:07:40.688       }
00:07:40.688     ]
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "vhost_scsi",
00:07:40.688     "config": []
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "vhost_blk",
00:07:40.688     "config": []
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "ublk",
00:07:40.688     "config": []
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "nbd",
00:07:40.688     "config": []
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "nvmf",
00:07:40.688     "config": [
00:07:40.688       {
00:07:40.688         "method": "nvmf_set_config",
00:07:40.688         "params": {
00:07:40.688           "discovery_filter": "match_any",
00:07:40.688           "admin_cmd_passthru": {
00:07:40.688             "identify_ctrlr": false
00:07:40.688           },
00:07:40.688           "dhchap_digests": [
00:07:40.688             "sha256",
00:07:40.688             "sha384",
00:07:40.688             "sha512"
00:07:40.688           ],
00:07:40.688           "dhchap_dhgroups": [
00:07:40.688             "null",
00:07:40.688             "ffdhe2048",
00:07:40.688             "ffdhe3072",
00:07:40.688             "ffdhe4096",
00:07:40.688             "ffdhe6144",
00:07:40.688             "ffdhe8192"
00:07:40.688           ]
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "nvmf_set_max_subsystems",
00:07:40.688         "params": {
00:07:40.688           "max_subsystems": 1024
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "nvmf_set_crdt",
00:07:40.688         "params": {
00:07:40.688           "crdt1": 0,
00:07:40.688           "crdt2": 0,
00:07:40.688           "crdt3": 0
00:07:40.688         }
00:07:40.688       },
00:07:40.688       {
00:07:40.688         "method": "nvmf_create_transport",
00:07:40.688         "params": {
00:07:40.688           "trtype": "TCP",
00:07:40.688           "max_queue_depth": 128,
00:07:40.688           "max_io_qpairs_per_ctrlr": 127,
00:07:40.688           "in_capsule_data_size": 4096,
00:07:40.688           "max_io_size": 131072,
00:07:40.688           "io_unit_size": 131072,
00:07:40.688           "max_aq_depth": 128,
00:07:40.688           "num_shared_buffers": 511,
00:07:40.688           "buf_cache_size": 4294967295,
00:07:40.688           "dif_insert_or_strip": false,
00:07:40.688           "zcopy": false,
00:07:40.688           "c2h_success": true,
00:07:40.688           "sock_priority": 0,
00:07:40.688           "abort_timeout_sec": 1,
00:07:40.688           "ack_timeout": 0,
00:07:40.688           "data_wr_pool_size": 0
00:07:40.688         }
00:07:40.688       }
00:07:40.688     ]
00:07:40.688   },
00:07:40.688   {
00:07:40.688     "subsystem": "iscsi",
00:07:40.688     "config": [
00:07:40.688       {
00:07:40.688         "method": "iscsi_set_options",
00:07:40.688         "params": {
00:07:40.688           "node_base": "iqn.2016-06.io.spdk",
00:07:40.688           "max_sessions": 128,
00:07:40.688           "max_connections_per_session": 2,
00:07:40.688           "max_queue_depth": 64,
00:07:40.688           "default_time2wait": 2,
00:07:40.688           "default_time2retain": 20,
00:07:40.688           "first_burst_length": 8192,
00:07:40.688           "immediate_data": true,
00:07:40.688           "allow_duplicated_isid": false,
00:07:40.688           "error_recovery_level": 0,
00:07:40.688           "nop_timeout": 60,
00:07:40.688           "nop_in_interval": 30,
00:07:40.688           "disable_chap": false,
00:07:40.688           "require_chap": false,
00:07:40.688           "mutual_chap": false,
00:07:40.688           "chap_group": 0,
00:07:40.688           "max_large_datain_per_connection": 64,
00:07:40.688           "max_r2t_per_connection": 4,
00:07:40.688           "pdu_pool_size": 36864,
00:07:40.688           "immediate_data_pool_size": 16384,
00:07:40.688           "data_out_pool_size": 2048
00:07:40.688         }
00:07:40.688       }
00:07:40.688     ]
00:07:40.688   }
00:07:40.688 ]
00:07:40.688 }
00:07:40.688 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:07:40.688 12:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 514636
00:07:40.688 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 514636 ']'
00:07:40.688 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 514636
00:07:40.689 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:07:40.689 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:40.689 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 514636
00:07:40.689 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:40.689 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:40.689 12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 514636'
killing process with pid 514636
12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 514636
12:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 514636
00:07:41.253 12:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=514776
00:07:41.253 12:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:07:41.253 12:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 514776
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 514776 ']'
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 514776
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 514776
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 514776'
killing process with pid 514776
12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 514776
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 514776
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:07:46.513
00:07:46.513 real 0m6.432s
00:07:46.513 user 0m6.080s
00:07:46.513 sys 0m0.677s
12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:07:46.513 ************************************
00:07:46.513 END TEST skip_rpc_with_json
00:07:46.513 ************************************
00:07:46.513 12:22:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
12:22:15 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
12:22:15 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
12:22:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:46.513 ************************************
00:07:46.513 START TEST skip_rpc_with_delay
00:07:46.513 ************************************
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay
12:22:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:07:46.513 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:07:46.784 [2024-11-05 12:22:15.778546] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:07:46.784 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:07:46.784 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:46.784 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:46.784 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:46.784
00:07:46.784 real 0m0.076s
00:07:46.784 user 0m0.052s
00:07:46.784 sys 0m0.023s
12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:46.784 12:22:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:07:46.784 ************************************
00:07:46.784 END TEST skip_rpc_with_delay
00:07:46.784 ************************************
12:22:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:07:46.784 12:22:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:07:46.784 12:22:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:07:46.784 12:22:15 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:46.784 12:22:15 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:46.784 12:22:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:46.784 ************************************
00:07:46.784 START TEST exit_on_failed_rpc_init
00:07:46.784 ************************************
00:07:46.784 12:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init
12:22:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=515501
12:22:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
12:22:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 515501
00:07:46.784 12:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 515501 ']'
12:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
12:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100
12:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable
12:22:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
[2024-11-05 12:22:15.899968] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization...
[2024-11-05 12:22:15.900083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515501 ]
[2024-11-05 12:22:15.966674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 12:22:16.017752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 ))
12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0
12:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
12:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:07:47.106 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:07:47.106 [2024-11-05 12:22:16.324368] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization...
00:07:47.106 [2024-11-05 12:22:16.324469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515507 ]
00:07:47.415 [2024-11-05 12:22:16.394184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.415 [2024-11-05 12:22:16.441369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:47.415 [2024-11-05 12:22:16.441498] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:07:47.415 [2024-11-05 12:22:16.441517] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:07:47.415 [2024-11-05 12:22:16.441529] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 515501
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 515501 ']'
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 515501
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 515501
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:47.415 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 515501'
killing process with pid 515501
12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 515501
12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 515501
00:07:47.700
00:07:47.700 real 0m1.064s
00:07:47.700 user 0m1.159s
00:07:47.700 sys 0m0.421s
12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:47.700 12:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:07:47.700 ************************************
00:07:47.700 END TEST exit_on_failed_rpc_init
00:07:47.700 ************************************
00:07:47.959 12:22:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:07:47.959
00:07:47.959 real 0m13.358s
00:07:47.959 user 0m12.600s
00:07:47.959 sys 0m1.610s
00:07:47.959 12:22:16 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:47.959 12:22:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:47.959 ************************************
00:07:47.959 END TEST skip_rpc
00:07:47.959 ************************************
00:07:47.959 12:22:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
12:22:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
12:22:16 -- common/autotest_common.sh@1109 -- # xtrace_disable
12:22:16 -- common/autotest_common.sh@10 -- # set +x
00:07:47.959 ************************************
00:07:47.959 START TEST rpc_client
00:07:47.959 ************************************
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:07:47.959 * Looking for test storage...
00:07:47.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@345 -- # : 1
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@353 -- # local d=1
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@355 -- # echo 1
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@353 -- # local d=2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@355 -- # echo 2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:47.959 12:22:17 rpc_client -- scripts/common.sh@368 -- # return 0
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:47.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.959 --rc genhtml_branch_coverage=1
00:07:47.959 --rc genhtml_function_coverage=1
00:07:47.959 --rc genhtml_legend=1
00:07:47.959 --rc geninfo_all_blocks=1
00:07:47.959 --rc geninfo_unexecuted_blocks=1
00:07:47.959
00:07:47.959 '
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:47.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.959 --rc genhtml_branch_coverage=1
00:07:47.959 --rc genhtml_function_coverage=1
00:07:47.959 --rc genhtml_legend=1
00:07:47.959 --rc geninfo_all_blocks=1
00:07:47.959 --rc geninfo_unexecuted_blocks=1
00:07:47.959
00:07:47.959 '
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:47.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.959 --rc genhtml_branch_coverage=1
00:07:47.959 --rc genhtml_function_coverage=1
00:07:47.959 --rc genhtml_legend=1
00:07:47.959 --rc geninfo_all_blocks=1
00:07:47.959 --rc geninfo_unexecuted_blocks=1
00:07:47.959
00:07:47.959 '
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:47.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.959 --rc genhtml_branch_coverage=1
00:07:47.959 --rc genhtml_function_coverage=1
00:07:47.959 --rc genhtml_legend=1
00:07:47.959 --rc geninfo_all_blocks=1
00:07:47.959 --rc geninfo_unexecuted_blocks=1
00:07:47.959
00:07:47.959 '
00:07:47.959 12:22:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:07:47.959 OK
00:07:47.959 12:22:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:47.959
00:07:47.959 real 0m0.166s
00:07:47.959 user 0m0.108s
00:07:47.959 sys 0m0.066s
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:47.959 12:22:17 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:07:47.959 ************************************
00:07:47.959 END TEST rpc_client
00:07:47.959 ************************************
00:07:47.959 12:22:17 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
12:22:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
12:22:17 -- common/autotest_common.sh@1109 -- # xtrace_disable
12:22:17 -- common/autotest_common.sh@10 -- # set +x
00:07:48.218 ************************************
00:07:48.218 START TEST json_config
00:07:48.218 ************************************
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:48.218 12:22:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:48.218 12:22:17 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:07:48.218 12:22:17 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:07:48.218 12:22:17 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:07:48.218 12:22:17 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:07:48.218 12:22:17 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:07:48.218 12:22:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:48.218 12:22:17 json_config -- scripts/common.sh@344 -- # case "$op" in
00:07:48.218 12:22:17 json_config -- scripts/common.sh@345 -- # : 1
00:07:48.218 12:22:17 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:48.218 12:22:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:48.218 12:22:17 json_config -- scripts/common.sh@365 -- # decimal 1
00:07:48.218 12:22:17 json_config -- scripts/common.sh@353 -- # local d=1
00:07:48.218 12:22:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:48.218 12:22:17 json_config -- scripts/common.sh@355 -- # echo 1
00:07:48.218 12:22:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:07:48.218 12:22:17 json_config -- scripts/common.sh@366 -- # decimal 2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@353 -- # local d=2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:48.218 12:22:17 json_config -- scripts/common.sh@355 -- # echo 2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:07:48.218 12:22:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:48.218 12:22:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:48.218 12:22:17 json_config -- scripts/common.sh@368 -- # return 0
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:48.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.218 --rc genhtml_branch_coverage=1
00:07:48.218 --rc genhtml_function_coverage=1
00:07:48.218 --rc genhtml_legend=1
00:07:48.218 --rc geninfo_all_blocks=1
00:07:48.218 --rc geninfo_unexecuted_blocks=1
00:07:48.218
00:07:48.218 '
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:48.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.218 --rc genhtml_branch_coverage=1
00:07:48.218 --rc genhtml_function_coverage=1
00:07:48.218 --rc genhtml_legend=1
00:07:48.218 --rc geninfo_all_blocks=1
00:07:48.218 --rc geninfo_unexecuted_blocks=1
00:07:48.218
00:07:48.218 '
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:48.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.218 --rc genhtml_branch_coverage=1
00:07:48.218 --rc genhtml_function_coverage=1
00:07:48.218 --rc genhtml_legend=1
00:07:48.218 --rc geninfo_all_blocks=1
00:07:48.218 --rc geninfo_unexecuted_blocks=1
00:07:48.218
00:07:48.218 '
00:07:48.218 12:22:17 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:48.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:48.218 --rc genhtml_branch_coverage=1
00:07:48.218 --rc genhtml_function_coverage=1
00:07:48.218 --rc genhtml_legend=1
00:07:48.218 --rc geninfo_all_blocks=1
00:07:48.218 --rc geninfo_unexecuted_blocks=1
00:07:48.218
00:07:48.218 '
00:07:48.218 12:22:17 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@7 -- # uname -s
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:48.218 12:22:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:48.219 12:22:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:07:48.219 12:22:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:48.219 12:22:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:48.219 12:22:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:48.219 12:22:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:48.219 12:22:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:48.219 12:22:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:48.219 12:22:17 json_config -- paths/export.sh@5 -- # export PATH
00:07:48.219 12:22:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@51 -- # : 0
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:48.219 12:22:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
INFO: JSON configuration test init
00:07:48.219 12:22:17
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:48.219 12:22:17 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:48.219 12:22:17 json_config -- json_config/common.sh@9 -- # local app=target 00:07:48.219 12:22:17 json_config -- json_config/common.sh@10 -- # shift 00:07:48.219 12:22:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:48.219 12:22:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:48.219 12:22:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:48.219 12:22:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:48.219 12:22:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:48.219 12:22:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=515771 00:07:48.219 12:22:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:48.219 Waiting for target to run... 
00:07:48.219 12:22:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:48.219 12:22:17 json_config -- json_config/common.sh@25 -- # waitforlisten 515771 /var/tmp/spdk_tgt.sock 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@833 -- # '[' -z 515771 ']' 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:48.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:48.219 12:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:48.219 [2024-11-05 12:22:17.420962] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:07:48.219 [2024-11-05 12:22:17.421067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515771 ] 00:07:48.786 [2024-11-05 12:22:17.756942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.786 [2024-11-05 12:22:17.788200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.351 12:22:18 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:49.351 12:22:18 json_config -- common/autotest_common.sh@866 -- # return 0 00:07:49.351 12:22:18 json_config -- json_config/common.sh@26 -- # echo '' 00:07:49.351 00:07:49.351 12:22:18 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:49.351 12:22:18 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:49.351 12:22:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.351 12:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 12:22:18 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:49.351 12:22:18 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:49.351 12:22:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.351 12:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:49.351 12:22:18 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:49.351 12:22:18 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:49.351 12:22:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:52.633 12:22:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.633 12:22:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:52.633 12:22:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@54 -- # sort 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:52.633 12:22:21 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:52.633 12:22:21 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:52.633 12:22:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.633 12:22:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:52.891 12:22:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.891 12:22:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:52.891 12:22:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:52.891 12:22:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:53.149 MallocForNvmf0 00:07:53.149 12:22:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:07:53.149 12:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:53.405 MallocForNvmf1 00:07:53.405 12:22:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:53.405 12:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:53.662 [2024-11-05 12:22:22.673252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.662 12:22:22 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.662 12:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.920 12:22:22 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:53.920 12:22:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:54.178 12:22:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:54.178 12:22:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:54.435 12:22:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:54.435 12:22:23 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:54.693 [2024-11-05 12:22:23.744764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:54.693 12:22:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:54.693 12:22:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.693 12:22:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.693 12:22:23 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:54.693 12:22:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.693 12:22:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.693 12:22:23 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:54.693 12:22:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:54.693 12:22:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:54.951 MallocBdevForConfigChangeCheck 00:07:54.951 12:22:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:54.951 12:22:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:54.951 12:22:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.951 12:22:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:54.951 12:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:55.516 12:22:24 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:07:55.516 INFO: shutting down applications... 00:07:55.516 12:22:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:55.516 12:22:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:55.516 12:22:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:55.516 12:22:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:56.887 Calling clear_iscsi_subsystem 00:07:56.887 Calling clear_nvmf_subsystem 00:07:56.887 Calling clear_nbd_subsystem 00:07:56.887 Calling clear_ublk_subsystem 00:07:56.887 Calling clear_vhost_blk_subsystem 00:07:56.887 Calling clear_vhost_scsi_subsystem 00:07:56.887 Calling clear_bdev_subsystem 00:07:56.887 12:22:26 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:56.887 12:22:26 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:56.887 12:22:26 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:56.887 12:22:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:56.887 12:22:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:56.887 12:22:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:57.452 12:22:26 json_config -- json_config/json_config.sh@352 -- # break 00:07:57.452 12:22:26 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:57.452 12:22:26 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:07:57.452 12:22:26 json_config -- json_config/common.sh@31 -- # local app=target 00:07:57.452 12:22:26 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:57.452 12:22:26 json_config -- json_config/common.sh@35 -- # [[ -n 515771 ]] 00:07:57.452 12:22:26 json_config -- json_config/common.sh@38 -- # kill -SIGINT 515771 00:07:57.452 12:22:26 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:57.452 12:22:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:57.452 12:22:26 json_config -- json_config/common.sh@41 -- # kill -0 515771 00:07:57.452 12:22:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:58.022 12:22:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:58.022 12:22:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:58.022 12:22:27 json_config -- json_config/common.sh@41 -- # kill -0 515771 00:07:58.022 12:22:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:58.022 12:22:27 json_config -- json_config/common.sh@43 -- # break 00:07:58.022 12:22:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:58.022 12:22:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:58.022 SPDK target shutdown done 00:07:58.022 12:22:27 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:58.022 INFO: relaunching applications... 
00:07:58.022 12:22:27 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:58.022 12:22:27 json_config -- json_config/common.sh@9 -- # local app=target 00:07:58.022 12:22:27 json_config -- json_config/common.sh@10 -- # shift 00:07:58.022 12:22:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:58.022 12:22:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:58.022 12:22:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:58.022 12:22:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:58.022 12:22:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:58.022 12:22:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=517091 00:07:58.022 12:22:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:58.022 12:22:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:58.022 Waiting for target to run... 00:07:58.022 12:22:27 json_config -- json_config/common.sh@25 -- # waitforlisten 517091 /var/tmp/spdk_tgt.sock 00:07:58.022 12:22:27 json_config -- common/autotest_common.sh@833 -- # '[' -z 517091 ']' 00:07:58.022 12:22:27 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:58.022 12:22:27 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.022 12:22:27 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:58.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:58.022 12:22:27 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.022 12:22:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:58.022 [2024-11-05 12:22:27.089269] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:07:58.022 [2024-11-05 12:22:27.089346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517091 ] 00:07:58.590 [2024-11-05 12:22:27.592911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.590 [2024-11-05 12:22:27.634134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.877 [2024-11-05 12:22:30.677754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.877 [2024-11-05 12:22:30.710237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:01.877 12:22:30 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.877 12:22:30 json_config -- common/autotest_common.sh@866 -- # return 0 00:08:01.877 12:22:30 json_config -- json_config/common.sh@26 -- # echo '' 00:08:01.877 00:08:01.877 12:22:30 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:01.877 12:22:30 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:01.877 INFO: Checking if target configuration is the same... 
00:08:01.877 12:22:30 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:01.877 12:22:30 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:01.877 12:22:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:01.877 + '[' 2 -ne 2 ']' 00:08:01.877 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:01.877 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:01.877 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:01.877 +++ basename /dev/fd/62 00:08:01.877 ++ mktemp /tmp/62.XXX 00:08:01.877 + tmp_file_1=/tmp/62.QzT 00:08:01.877 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:01.877 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:01.877 + tmp_file_2=/tmp/spdk_tgt_config.json.sb8 00:08:01.877 + ret=0 00:08:01.877 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:02.135 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:02.135 + diff -u /tmp/62.QzT /tmp/spdk_tgt_config.json.sb8 00:08:02.135 + echo 'INFO: JSON config files are the same' 00:08:02.135 INFO: JSON config files are the same 00:08:02.135 + rm /tmp/62.QzT /tmp/spdk_tgt_config.json.sb8 00:08:02.135 + exit 0 00:08:02.135 12:22:31 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:02.135 12:22:31 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:02.135 INFO: changing configuration and checking if this can be detected... 
00:08:02.135 12:22:31 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:02.135 12:22:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:02.393 12:22:31 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:02.393 12:22:31 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:02.393 12:22:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:02.393 + '[' 2 -ne 2 ']' 00:08:02.393 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:02.393 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:02.393 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:02.393 +++ basename /dev/fd/62 00:08:02.393 ++ mktemp /tmp/62.XXX 00:08:02.393 + tmp_file_1=/tmp/62.qJG 00:08:02.393 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:02.393 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:02.393 + tmp_file_2=/tmp/spdk_tgt_config.json.Xuh 00:08:02.393 + ret=0 00:08:02.393 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:02.651 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:02.909 + diff -u /tmp/62.qJG /tmp/spdk_tgt_config.json.Xuh 00:08:02.909 + ret=1 00:08:02.909 + echo '=== Start of file: /tmp/62.qJG ===' 00:08:02.909 + cat /tmp/62.qJG 00:08:02.909 + echo '=== End of file: /tmp/62.qJG ===' 00:08:02.909 + echo '' 00:08:02.909 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Xuh ===' 00:08:02.909 + cat /tmp/spdk_tgt_config.json.Xuh 00:08:02.909 + echo '=== End of file: /tmp/spdk_tgt_config.json.Xuh ===' 00:08:02.909 + echo '' 00:08:02.909 + rm /tmp/62.qJG /tmp/spdk_tgt_config.json.Xuh 00:08:02.909 + exit 1 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:02.909 INFO: configuration change detected. 
00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@324 -- # [[ -n 517091 ]] 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 12:22:31 json_config -- json_config/json_config.sh@330 -- # killprocess 517091 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@952 -- # '[' -z 517091 ']' 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@956 -- # kill -0 517091 
00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@957 -- # uname 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 517091 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:02.909 12:22:31 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:02.909 12:22:32 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 517091' 00:08:02.909 killing process with pid 517091 00:08:02.909 12:22:32 json_config -- common/autotest_common.sh@971 -- # kill 517091 00:08:02.909 12:22:32 json_config -- common/autotest_common.sh@976 -- # wait 517091 00:08:04.806 12:22:33 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:04.806 12:22:33 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:04.806 12:22:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:04.806 12:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.806 12:22:33 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:04.806 12:22:33 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:04.806 INFO: Success 00:08:04.806 00:08:04.806 real 0m16.371s 00:08:04.806 user 0m18.419s 00:08:04.806 sys 0m2.055s 00:08:04.806 12:22:33 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.806 12:22:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.806 ************************************ 00:08:04.806 END TEST json_config 00:08:04.806 ************************************ 00:08:04.806 12:22:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:04.806 12:22:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.806 12:22:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.806 12:22:33 -- common/autotest_common.sh@10 -- # set +x 00:08:04.806 ************************************ 00:08:04.806 START TEST json_config_extra_key 00:08:04.806 ************************************ 00:08:04.806 12:22:33 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:04.806 12:22:33 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:04.806 12:22:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:08:04.806 12:22:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:04.806 12:22:33 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.806 12:22:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:04.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.807 --rc genhtml_branch_coverage=1 00:08:04.807 --rc genhtml_function_coverage=1 00:08:04.807 --rc genhtml_legend=1 00:08:04.807 --rc geninfo_all_blocks=1 
00:08:04.807 --rc geninfo_unexecuted_blocks=1 00:08:04.807 00:08:04.807 ' 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:04.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.807 --rc genhtml_branch_coverage=1 00:08:04.807 --rc genhtml_function_coverage=1 00:08:04.807 --rc genhtml_legend=1 00:08:04.807 --rc geninfo_all_blocks=1 00:08:04.807 --rc geninfo_unexecuted_blocks=1 00:08:04.807 00:08:04.807 ' 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:04.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.807 --rc genhtml_branch_coverage=1 00:08:04.807 --rc genhtml_function_coverage=1 00:08:04.807 --rc genhtml_legend=1 00:08:04.807 --rc geninfo_all_blocks=1 00:08:04.807 --rc geninfo_unexecuted_blocks=1 00:08:04.807 00:08:04.807 ' 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:04.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.807 --rc genhtml_branch_coverage=1 00:08:04.807 --rc genhtml_function_coverage=1 00:08:04.807 --rc genhtml_legend=1 00:08:04.807 --rc geninfo_all_blocks=1 00:08:04.807 --rc geninfo_unexecuted_blocks=1 00:08:04.807 00:08:04.807 ' 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.807 12:22:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.807 12:22:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.807 12:22:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.807 12:22:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.807 12:22:33 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.807 12:22:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.807 12:22:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.807 12:22:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:04.807 12:22:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:04.807 12:22:33 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.807 12:22:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:04.807 INFO: launching applications... 00:08:04.807 12:22:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=518007 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:04.807 Waiting for target to run... 
00:08:04.807 12:22:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 518007 /var/tmp/spdk_tgt.sock 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 518007 ']' 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:04.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.807 12:22:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:04.807 [2024-11-05 12:22:33.834830] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:04.807 [2024-11-05 12:22:33.834923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518007 ] 00:08:05.374 [2024-11-05 12:22:34.348942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.374 [2024-11-05 12:22:34.391029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.631 12:22:34 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:05.631 12:22:34 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:05.631 00:08:05.631 12:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:08:05.631 INFO: shutting down applications... 00:08:05.631 12:22:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 518007 ]] 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 518007 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 518007 00:08:05.631 12:22:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:06.196 12:22:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:06.196 12:22:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:06.196 12:22:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 518007 00:08:06.196 12:22:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:06.196 12:22:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:06.196 12:22:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:06.196 12:22:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:06.196 SPDK target shutdown done 00:08:06.196 12:22:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:06.196 Success 00:08:06.196 00:08:06.196 real 0m1.681s 00:08:06.196 user 0m1.467s 00:08:06.196 sys 0m0.609s 00:08:06.196 12:22:35 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.196 12:22:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:08:06.196 ************************************ 00:08:06.196 END TEST json_config_extra_key 00:08:06.196 ************************************ 00:08:06.196 12:22:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:06.196 12:22:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.196 12:22:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.196 12:22:35 -- common/autotest_common.sh@10 -- # set +x 00:08:06.196 ************************************ 00:08:06.196 START TEST alias_rpc 00:08:06.196 ************************************ 00:08:06.196 12:22:35 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:06.196 * Looking for test storage... 00:08:06.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:06.196 12:22:35 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:06.196 12:22:35 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:06.196 12:22:35 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:06.454 12:22:35 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.454 12:22:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.455 12:22:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:06.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.455 --rc genhtml_branch_coverage=1 00:08:06.455 --rc genhtml_function_coverage=1 00:08:06.455 --rc genhtml_legend=1 00:08:06.455 --rc geninfo_all_blocks=1 00:08:06.455 --rc geninfo_unexecuted_blocks=1 00:08:06.455 00:08:06.455 ' 
00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:06.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.455 --rc genhtml_branch_coverage=1 00:08:06.455 --rc genhtml_function_coverage=1 00:08:06.455 --rc genhtml_legend=1 00:08:06.455 --rc geninfo_all_blocks=1 00:08:06.455 --rc geninfo_unexecuted_blocks=1 00:08:06.455 00:08:06.455 ' 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:06.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.455 --rc genhtml_branch_coverage=1 00:08:06.455 --rc genhtml_function_coverage=1 00:08:06.455 --rc genhtml_legend=1 00:08:06.455 --rc geninfo_all_blocks=1 00:08:06.455 --rc geninfo_unexecuted_blocks=1 00:08:06.455 00:08:06.455 ' 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:06.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.455 --rc genhtml_branch_coverage=1 00:08:06.455 --rc genhtml_function_coverage=1 00:08:06.455 --rc genhtml_legend=1 00:08:06.455 --rc geninfo_all_blocks=1 00:08:06.455 --rc geninfo_unexecuted_blocks=1 00:08:06.455 00:08:06.455 ' 00:08:06.455 12:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:06.455 12:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=518212 00:08:06.455 12:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:06.455 12:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 518212 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 518212 ']' 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.455 12:22:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.455 [2024-11-05 12:22:35.573244] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:06.455 [2024-11-05 12:22:35.573350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518212 ] 00:08:06.455 [2024-11-05 12:22:35.642505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.455 [2024-11-05 12:22:35.690804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.713 12:22:35 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.713 12:22:35 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:06.713 12:22:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:07.277 12:22:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 518212 00:08:07.277 12:22:36 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 518212 ']' 00:08:07.277 12:22:36 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 518212 00:08:07.277 12:22:36 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:08:07.278 12:22:36 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:07.278 12:22:36 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 518212 00:08:07.278 12:22:36 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:07.278 12:22:36 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:07.278 12:22:36 alias_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 518212' 00:08:07.278 killing process with pid 518212 00:08:07.278 12:22:36 alias_rpc -- common/autotest_common.sh@971 -- # kill 518212 00:08:07.278 12:22:36 alias_rpc -- common/autotest_common.sh@976 -- # wait 518212 00:08:07.535 00:08:07.535 real 0m1.257s 00:08:07.535 user 0m1.364s 00:08:07.535 sys 0m0.447s 00:08:07.535 12:22:36 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.535 12:22:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.535 ************************************ 00:08:07.535 END TEST alias_rpc 00:08:07.535 ************************************ 00:08:07.535 12:22:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:07.535 12:22:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:07.535 12:22:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.535 12:22:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.535 12:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:07.535 ************************************ 00:08:07.535 START TEST spdkcli_tcp 00:08:07.535 ************************************ 00:08:07.535 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:07.535 * Looking for test storage... 
00:08:07.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:07.535 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.535 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.535 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.793 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.793 12:22:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:07.793 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.793 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.793 --rc genhtml_branch_coverage=1 00:08:07.793 --rc genhtml_function_coverage=1 00:08:07.793 --rc genhtml_legend=1 00:08:07.793 --rc geninfo_all_blocks=1 00:08:07.793 --rc geninfo_unexecuted_blocks=1 00:08:07.793 00:08:07.793 ' 00:08:07.793 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.793 --rc genhtml_branch_coverage=1 00:08:07.793 --rc genhtml_function_coverage=1 00:08:07.793 --rc genhtml_legend=1 00:08:07.793 --rc geninfo_all_blocks=1 00:08:07.793 --rc geninfo_unexecuted_blocks=1 00:08:07.793 00:08:07.793 ' 00:08:07.793 12:22:36 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.793 --rc genhtml_branch_coverage=1 00:08:07.793 --rc genhtml_function_coverage=1 00:08:07.793 --rc genhtml_legend=1 00:08:07.793 --rc geninfo_all_blocks=1 00:08:07.793 --rc geninfo_unexecuted_blocks=1 00:08:07.793 00:08:07.793 ' 00:08:07.793 12:22:36 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.793 --rc genhtml_branch_coverage=1 00:08:07.793 --rc genhtml_function_coverage=1 00:08:07.793 --rc genhtml_legend=1 00:08:07.793 --rc geninfo_all_blocks=1 00:08:07.793 --rc geninfo_unexecuted_blocks=1 00:08:07.793 00:08:07.793 ' 00:08:07.793 12:22:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:07.793 12:22:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:07.793 12:22:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:07.793 12:22:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:07.793 12:22:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:07.794 12:22:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:07.794 12:22:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.794 12:22:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=518519 00:08:07.794 12:22:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:07.794 12:22:36 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 518519 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 518519 ']' 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.794 12:22:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.794 [2024-11-05 12:22:36.880629] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:07.794 [2024-11-05 12:22:36.880729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518519 ] 00:08:07.794 [2024-11-05 12:22:36.946002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:07.794 [2024-11-05 12:22:36.992894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.794 [2024-11-05 12:22:36.992899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.051 12:22:37 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.051 12:22:37 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:08:08.051 12:22:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=518532 00:08:08.051 12:22:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:08.051 12:22:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:08:08.309 [ 00:08:08.309 "bdev_malloc_delete", 00:08:08.309 "bdev_malloc_create", 00:08:08.309 "bdev_null_resize", 00:08:08.309 "bdev_null_delete", 00:08:08.309 "bdev_null_create", 00:08:08.309 "bdev_nvme_cuse_unregister", 00:08:08.309 "bdev_nvme_cuse_register", 00:08:08.309 "bdev_opal_new_user", 00:08:08.309 "bdev_opal_set_lock_state", 00:08:08.309 "bdev_opal_delete", 00:08:08.309 "bdev_opal_get_info", 00:08:08.309 "bdev_opal_create", 00:08:08.309 "bdev_nvme_opal_revert", 00:08:08.309 "bdev_nvme_opal_init", 00:08:08.309 "bdev_nvme_send_cmd", 00:08:08.309 "bdev_nvme_set_keys", 00:08:08.309 "bdev_nvme_get_path_iostat", 00:08:08.309 "bdev_nvme_get_mdns_discovery_info", 00:08:08.309 "bdev_nvme_stop_mdns_discovery", 00:08:08.309 "bdev_nvme_start_mdns_discovery", 00:08:08.309 "bdev_nvme_set_multipath_policy", 00:08:08.309 "bdev_nvme_set_preferred_path", 00:08:08.309 "bdev_nvme_get_io_paths", 00:08:08.309 "bdev_nvme_remove_error_injection", 00:08:08.309 "bdev_nvme_add_error_injection", 00:08:08.309 "bdev_nvme_get_discovery_info", 00:08:08.309 "bdev_nvme_stop_discovery", 00:08:08.309 "bdev_nvme_start_discovery", 00:08:08.309 "bdev_nvme_get_controller_health_info", 00:08:08.309 "bdev_nvme_disable_controller", 00:08:08.309 "bdev_nvme_enable_controller", 00:08:08.309 "bdev_nvme_reset_controller", 00:08:08.309 "bdev_nvme_get_transport_statistics", 00:08:08.309 "bdev_nvme_apply_firmware", 00:08:08.309 "bdev_nvme_detach_controller", 00:08:08.309 "bdev_nvme_get_controllers", 00:08:08.309 "bdev_nvme_attach_controller", 00:08:08.309 "bdev_nvme_set_hotplug", 00:08:08.309 "bdev_nvme_set_options", 00:08:08.309 "bdev_passthru_delete", 00:08:08.309 "bdev_passthru_create", 00:08:08.309 "bdev_lvol_set_parent_bdev", 00:08:08.309 "bdev_lvol_set_parent", 00:08:08.309 "bdev_lvol_check_shallow_copy", 00:08:08.309 "bdev_lvol_start_shallow_copy", 00:08:08.309 "bdev_lvol_grow_lvstore", 00:08:08.309 "bdev_lvol_get_lvols", 00:08:08.309 "bdev_lvol_get_lvstores", 
00:08:08.309 "bdev_lvol_delete", 00:08:08.309 "bdev_lvol_set_read_only", 00:08:08.309 "bdev_lvol_resize", 00:08:08.309 "bdev_lvol_decouple_parent", 00:08:08.309 "bdev_lvol_inflate", 00:08:08.309 "bdev_lvol_rename", 00:08:08.309 "bdev_lvol_clone_bdev", 00:08:08.309 "bdev_lvol_clone", 00:08:08.309 "bdev_lvol_snapshot", 00:08:08.309 "bdev_lvol_create", 00:08:08.309 "bdev_lvol_delete_lvstore", 00:08:08.309 "bdev_lvol_rename_lvstore", 00:08:08.309 "bdev_lvol_create_lvstore", 00:08:08.309 "bdev_raid_set_options", 00:08:08.309 "bdev_raid_remove_base_bdev", 00:08:08.309 "bdev_raid_add_base_bdev", 00:08:08.309 "bdev_raid_delete", 00:08:08.309 "bdev_raid_create", 00:08:08.309 "bdev_raid_get_bdevs", 00:08:08.309 "bdev_error_inject_error", 00:08:08.309 "bdev_error_delete", 00:08:08.309 "bdev_error_create", 00:08:08.309 "bdev_split_delete", 00:08:08.309 "bdev_split_create", 00:08:08.309 "bdev_delay_delete", 00:08:08.309 "bdev_delay_create", 00:08:08.309 "bdev_delay_update_latency", 00:08:08.310 "bdev_zone_block_delete", 00:08:08.310 "bdev_zone_block_create", 00:08:08.310 "blobfs_create", 00:08:08.310 "blobfs_detect", 00:08:08.310 "blobfs_set_cache_size", 00:08:08.310 "bdev_aio_delete", 00:08:08.310 "bdev_aio_rescan", 00:08:08.310 "bdev_aio_create", 00:08:08.310 "bdev_ftl_set_property", 00:08:08.310 "bdev_ftl_get_properties", 00:08:08.310 "bdev_ftl_get_stats", 00:08:08.310 "bdev_ftl_unmap", 00:08:08.310 "bdev_ftl_unload", 00:08:08.310 "bdev_ftl_delete", 00:08:08.310 "bdev_ftl_load", 00:08:08.310 "bdev_ftl_create", 00:08:08.310 "bdev_virtio_attach_controller", 00:08:08.310 "bdev_virtio_scsi_get_devices", 00:08:08.310 "bdev_virtio_detach_controller", 00:08:08.310 "bdev_virtio_blk_set_hotplug", 00:08:08.310 "bdev_iscsi_delete", 00:08:08.310 "bdev_iscsi_create", 00:08:08.310 "bdev_iscsi_set_options", 00:08:08.310 "accel_error_inject_error", 00:08:08.310 "ioat_scan_accel_module", 00:08:08.310 "dsa_scan_accel_module", 00:08:08.310 "iaa_scan_accel_module", 00:08:08.310 
"vfu_virtio_create_fs_endpoint", 00:08:08.310 "vfu_virtio_create_scsi_endpoint", 00:08:08.310 "vfu_virtio_scsi_remove_target", 00:08:08.310 "vfu_virtio_scsi_add_target", 00:08:08.310 "vfu_virtio_create_blk_endpoint", 00:08:08.310 "vfu_virtio_delete_endpoint", 00:08:08.310 "keyring_file_remove_key", 00:08:08.310 "keyring_file_add_key", 00:08:08.310 "keyring_linux_set_options", 00:08:08.310 "fsdev_aio_delete", 00:08:08.310 "fsdev_aio_create", 00:08:08.310 "iscsi_get_histogram", 00:08:08.310 "iscsi_enable_histogram", 00:08:08.310 "iscsi_set_options", 00:08:08.310 "iscsi_get_auth_groups", 00:08:08.310 "iscsi_auth_group_remove_secret", 00:08:08.310 "iscsi_auth_group_add_secret", 00:08:08.310 "iscsi_delete_auth_group", 00:08:08.310 "iscsi_create_auth_group", 00:08:08.310 "iscsi_set_discovery_auth", 00:08:08.310 "iscsi_get_options", 00:08:08.310 "iscsi_target_node_request_logout", 00:08:08.310 "iscsi_target_node_set_redirect", 00:08:08.310 "iscsi_target_node_set_auth", 00:08:08.310 "iscsi_target_node_add_lun", 00:08:08.310 "iscsi_get_stats", 00:08:08.310 "iscsi_get_connections", 00:08:08.310 "iscsi_portal_group_set_auth", 00:08:08.310 "iscsi_start_portal_group", 00:08:08.310 "iscsi_delete_portal_group", 00:08:08.310 "iscsi_create_portal_group", 00:08:08.310 "iscsi_get_portal_groups", 00:08:08.310 "iscsi_delete_target_node", 00:08:08.310 "iscsi_target_node_remove_pg_ig_maps", 00:08:08.310 "iscsi_target_node_add_pg_ig_maps", 00:08:08.310 "iscsi_create_target_node", 00:08:08.310 "iscsi_get_target_nodes", 00:08:08.310 "iscsi_delete_initiator_group", 00:08:08.310 "iscsi_initiator_group_remove_initiators", 00:08:08.310 "iscsi_initiator_group_add_initiators", 00:08:08.310 "iscsi_create_initiator_group", 00:08:08.310 "iscsi_get_initiator_groups", 00:08:08.310 "nvmf_set_crdt", 00:08:08.310 "nvmf_set_config", 00:08:08.310 "nvmf_set_max_subsystems", 00:08:08.310 "nvmf_stop_mdns_prr", 00:08:08.310 "nvmf_publish_mdns_prr", 00:08:08.310 "nvmf_subsystem_get_listeners", 00:08:08.310 
"nvmf_subsystem_get_qpairs", 00:08:08.310 "nvmf_subsystem_get_controllers", 00:08:08.310 "nvmf_get_stats", 00:08:08.310 "nvmf_get_transports", 00:08:08.310 "nvmf_create_transport", 00:08:08.310 "nvmf_get_targets", 00:08:08.310 "nvmf_delete_target", 00:08:08.310 "nvmf_create_target", 00:08:08.310 "nvmf_subsystem_allow_any_host", 00:08:08.310 "nvmf_subsystem_set_keys", 00:08:08.310 "nvmf_subsystem_remove_host", 00:08:08.310 "nvmf_subsystem_add_host", 00:08:08.310 "nvmf_ns_remove_host", 00:08:08.310 "nvmf_ns_add_host", 00:08:08.310 "nvmf_subsystem_remove_ns", 00:08:08.310 "nvmf_subsystem_set_ns_ana_group", 00:08:08.310 "nvmf_subsystem_add_ns", 00:08:08.310 "nvmf_subsystem_listener_set_ana_state", 00:08:08.310 "nvmf_discovery_get_referrals", 00:08:08.310 "nvmf_discovery_remove_referral", 00:08:08.310 "nvmf_discovery_add_referral", 00:08:08.310 "nvmf_subsystem_remove_listener", 00:08:08.310 "nvmf_subsystem_add_listener", 00:08:08.310 "nvmf_delete_subsystem", 00:08:08.310 "nvmf_create_subsystem", 00:08:08.310 "nvmf_get_subsystems", 00:08:08.310 "env_dpdk_get_mem_stats", 00:08:08.310 "nbd_get_disks", 00:08:08.310 "nbd_stop_disk", 00:08:08.310 "nbd_start_disk", 00:08:08.310 "ublk_recover_disk", 00:08:08.310 "ublk_get_disks", 00:08:08.310 "ublk_stop_disk", 00:08:08.310 "ublk_start_disk", 00:08:08.310 "ublk_destroy_target", 00:08:08.310 "ublk_create_target", 00:08:08.310 "virtio_blk_create_transport", 00:08:08.310 "virtio_blk_get_transports", 00:08:08.310 "vhost_controller_set_coalescing", 00:08:08.310 "vhost_get_controllers", 00:08:08.310 "vhost_delete_controller", 00:08:08.310 "vhost_create_blk_controller", 00:08:08.310 "vhost_scsi_controller_remove_target", 00:08:08.310 "vhost_scsi_controller_add_target", 00:08:08.310 "vhost_start_scsi_controller", 00:08:08.310 "vhost_create_scsi_controller", 00:08:08.310 "thread_set_cpumask", 00:08:08.310 "scheduler_set_options", 00:08:08.310 "framework_get_governor", 00:08:08.310 "framework_get_scheduler", 00:08:08.310 
"framework_set_scheduler", 00:08:08.310 "framework_get_reactors", 00:08:08.310 "thread_get_io_channels", 00:08:08.310 "thread_get_pollers", 00:08:08.310 "thread_get_stats", 00:08:08.310 "framework_monitor_context_switch", 00:08:08.310 "spdk_kill_instance", 00:08:08.310 "log_enable_timestamps", 00:08:08.310 "log_get_flags", 00:08:08.310 "log_clear_flag", 00:08:08.310 "log_set_flag", 00:08:08.310 "log_get_level", 00:08:08.310 "log_set_level", 00:08:08.310 "log_get_print_level", 00:08:08.310 "log_set_print_level", 00:08:08.310 "framework_enable_cpumask_locks", 00:08:08.310 "framework_disable_cpumask_locks", 00:08:08.310 "framework_wait_init", 00:08:08.310 "framework_start_init", 00:08:08.310 "scsi_get_devices", 00:08:08.310 "bdev_get_histogram", 00:08:08.310 "bdev_enable_histogram", 00:08:08.310 "bdev_set_qos_limit", 00:08:08.310 "bdev_set_qd_sampling_period", 00:08:08.310 "bdev_get_bdevs", 00:08:08.310 "bdev_reset_iostat", 00:08:08.310 "bdev_get_iostat", 00:08:08.310 "bdev_examine", 00:08:08.310 "bdev_wait_for_examine", 00:08:08.310 "bdev_set_options", 00:08:08.310 "accel_get_stats", 00:08:08.310 "accel_set_options", 00:08:08.310 "accel_set_driver", 00:08:08.310 "accel_crypto_key_destroy", 00:08:08.310 "accel_crypto_keys_get", 00:08:08.310 "accel_crypto_key_create", 00:08:08.310 "accel_assign_opc", 00:08:08.310 "accel_get_module_info", 00:08:08.310 "accel_get_opc_assignments", 00:08:08.310 "vmd_rescan", 00:08:08.310 "vmd_remove_device", 00:08:08.310 "vmd_enable", 00:08:08.310 "sock_get_default_impl", 00:08:08.310 "sock_set_default_impl", 00:08:08.310 "sock_impl_set_options", 00:08:08.310 "sock_impl_get_options", 00:08:08.310 "iobuf_get_stats", 00:08:08.310 "iobuf_set_options", 00:08:08.310 "keyring_get_keys", 00:08:08.310 "vfu_tgt_set_base_path", 00:08:08.310 "framework_get_pci_devices", 00:08:08.310 "framework_get_config", 00:08:08.310 "framework_get_subsystems", 00:08:08.310 "fsdev_set_opts", 00:08:08.310 "fsdev_get_opts", 00:08:08.310 "trace_get_info", 
00:08:08.310 "trace_get_tpoint_group_mask", 00:08:08.310 "trace_disable_tpoint_group", 00:08:08.310 "trace_enable_tpoint_group", 00:08:08.310 "trace_clear_tpoint_mask", 00:08:08.310 "trace_set_tpoint_mask", 00:08:08.310 "notify_get_notifications", 00:08:08.310 "notify_get_types", 00:08:08.310 "spdk_get_version", 00:08:08.310 "rpc_get_methods" 00:08:08.310 ] 00:08:08.310 12:22:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:08.310 12:22:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.310 12:22:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:08.310 12:22:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:08.310 12:22:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 518519 00:08:08.310 12:22:37 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 518519 ']' 00:08:08.310 12:22:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 518519 00:08:08.310 12:22:37 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:08:08.310 12:22:37 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:08.310 12:22:37 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 518519 00:08:08.568 12:22:37 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:08.568 12:22:37 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:08.568 12:22:37 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 518519' 00:08:08.568 killing process with pid 518519 00:08:08.568 12:22:37 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 518519 00:08:08.568 12:22:37 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 518519 00:08:08.826 00:08:08.826 real 0m1.255s 00:08:08.826 user 0m2.293s 00:08:08.826 sys 0m0.437s 00:08:08.826 12:22:37 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.826 12:22:37 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:08:08.826 ************************************ 00:08:08.826 END TEST spdkcli_tcp 00:08:08.826 ************************************ 00:08:08.826 12:22:37 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:08.826 12:22:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.826 12:22:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.826 12:22:37 -- common/autotest_common.sh@10 -- # set +x 00:08:08.826 ************************************ 00:08:08.826 START TEST dpdk_mem_utility 00:08:08.826 ************************************ 00:08:08.826 12:22:37 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:08.826 * Looking for test storage... 00:08:08.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:08.826 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:08.826 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:08:08.826 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.085 12:22:38 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.085 12:22:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.085 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.085 --rc genhtml_branch_coverage=1 00:08:09.085 --rc genhtml_function_coverage=1 00:08:09.085 --rc genhtml_legend=1 00:08:09.085 --rc geninfo_all_blocks=1 00:08:09.085 --rc geninfo_unexecuted_blocks=1 00:08:09.085 00:08:09.085 ' 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.085 --rc genhtml_branch_coverage=1 00:08:09.085 --rc genhtml_function_coverage=1 00:08:09.085 --rc genhtml_legend=1 00:08:09.085 --rc geninfo_all_blocks=1 00:08:09.085 --rc geninfo_unexecuted_blocks=1 00:08:09.085 00:08:09.085 ' 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.085 --rc genhtml_branch_coverage=1 00:08:09.085 --rc genhtml_function_coverage=1 00:08:09.085 --rc genhtml_legend=1 00:08:09.085 --rc geninfo_all_blocks=1 00:08:09.085 --rc geninfo_unexecuted_blocks=1 00:08:09.085 00:08:09.085 ' 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.085 --rc genhtml_branch_coverage=1 00:08:09.085 --rc genhtml_function_coverage=1 00:08:09.085 --rc genhtml_legend=1 00:08:09.085 --rc geninfo_all_blocks=1 00:08:09.085 --rc geninfo_unexecuted_blocks=1 00:08:09.085 00:08:09.085 ' 00:08:09.085 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:09.085 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=518735 00:08:09.085 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:09.085 12:22:38 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 518735 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 518735 ']' 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.085 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:09.085 [2024-11-05 12:22:38.185755] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:09.085 [2024-11-05 12:22:38.185868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518735 ] 00:08:09.085 [2024-11-05 12:22:38.252241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.085 [2024-11-05 12:22:38.298818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.343 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.343 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:08:09.343 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:09.343 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:09.343 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.343 
12:22:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:09.343 { 00:08:09.343 "filename": "/tmp/spdk_mem_dump.txt" 00:08:09.343 } 00:08:09.343 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.343 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:09.601 DPDK memory size 810.000000 MiB in 1 heap(s) 00:08:09.601 1 heaps totaling size 810.000000 MiB 00:08:09.601 size: 810.000000 MiB heap id: 0 00:08:09.601 end heaps---------- 00:08:09.601 9 mempools totaling size 595.772034 MiB 00:08:09.601 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:09.601 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:09.601 size: 92.545471 MiB name: bdev_io_518735 00:08:09.601 size: 50.003479 MiB name: msgpool_518735 00:08:09.601 size: 36.509338 MiB name: fsdev_io_518735 00:08:09.601 size: 21.763794 MiB name: PDU_Pool 00:08:09.601 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:09.601 size: 4.133484 MiB name: evtpool_518735 00:08:09.601 size: 0.026123 MiB name: Session_Pool 00:08:09.601 end mempools------- 00:08:09.601 6 memzones totaling size 4.142822 MiB 00:08:09.601 size: 1.000366 MiB name: RG_ring_0_518735 00:08:09.601 size: 1.000366 MiB name: RG_ring_1_518735 00:08:09.601 size: 1.000366 MiB name: RG_ring_4_518735 00:08:09.601 size: 1.000366 MiB name: RG_ring_5_518735 00:08:09.601 size: 0.125366 MiB name: RG_ring_2_518735 00:08:09.601 size: 0.015991 MiB name: RG_ring_3_518735 00:08:09.601 end memzones------- 00:08:09.601 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:09.601 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:09.601 list of free elements. 
size: 10.862488 MiB 00:08:09.601 element at address: 0x200018a00000 with size: 0.999878 MiB 00:08:09.601 element at address: 0x200018c00000 with size: 0.999878 MiB 00:08:09.601 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:09.601 element at address: 0x200031800000 with size: 0.994446 MiB 00:08:09.601 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:09.601 element at address: 0x200012c00000 with size: 0.954285 MiB 00:08:09.601 element at address: 0x200018e00000 with size: 0.936584 MiB 00:08:09.601 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:09.601 element at address: 0x20001a600000 with size: 0.582886 MiB 00:08:09.601 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:09.601 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:09.601 element at address: 0x200019000000 with size: 0.485657 MiB 00:08:09.601 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:09.601 element at address: 0x200027a00000 with size: 0.410034 MiB 00:08:09.601 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:09.601 list of standard malloc elements. 
size: 199.218628 MiB 00:08:09.601 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:09.601 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:09.601 element at address: 0x200018afff80 with size: 1.000122 MiB 00:08:09.601 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:08:09.601 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:09.601 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:09.601 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:08:09.601 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:09.601 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:08:09.601 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:09.601 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:09.601 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:09.601 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:09.601 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:09.601 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:09.601 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:09.601 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:09.601 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:09.601 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:09.601 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:09.601 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:09.601 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:09.601 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:09.602 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:09.602 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:08:09.602 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:08:09.602 element at address: 0x20001a695380 with size: 0.000183 MiB 00:08:09.602 element at address: 0x20001a695440 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200027a69040 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:08:09.602 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:08:09.602 list of memzone associated elements. 
size: 599.918884 MiB 00:08:09.602 element at address: 0x20001a695500 with size: 211.416748 MiB 00:08:09.602 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:09.602 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:08:09.602 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:09.602 element at address: 0x200012df4780 with size: 92.045044 MiB 00:08:09.602 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_518735_0 00:08:09.602 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:09.602 associated memzone info: size: 48.002930 MiB name: MP_msgpool_518735_0 00:08:09.602 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:09.602 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_518735_0 00:08:09.602 element at address: 0x2000191be940 with size: 20.255554 MiB 00:08:09.602 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:09.602 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:08:09.602 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:09.602 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:09.602 associated memzone info: size: 3.000122 MiB name: MP_evtpool_518735_0 00:08:09.602 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:09.602 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_518735 00:08:09.602 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:09.602 associated memzone info: size: 1.007996 MiB name: MP_evtpool_518735 00:08:09.602 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:09.602 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:09.602 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:08:09.602 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:09.602 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:09.602 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:09.602 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:09.602 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:09.602 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:09.602 associated memzone info: size: 1.000366 MiB name: RG_ring_0_518735 00:08:09.602 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:09.602 associated memzone info: size: 1.000366 MiB name: RG_ring_1_518735 00:08:09.602 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:08:09.602 associated memzone info: size: 1.000366 MiB name: RG_ring_4_518735 00:08:09.602 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:08:09.602 associated memzone info: size: 1.000366 MiB name: RG_ring_5_518735 00:08:09.602 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:09.602 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_518735 00:08:09.602 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:09.602 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_518735 00:08:09.602 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:09.602 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:09.602 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:09.602 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:09.602 element at address: 0x20001907c540 with size: 0.250488 MiB 00:08:09.602 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:09.602 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:09.602 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_518735 00:08:09.602 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:09.602 associated memzone info: size: 0.125366 MiB name: RG_ring_2_518735 00:08:09.602 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:09.602 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:09.602 element at address: 0x200027a69100 with size: 0.023743 MiB 00:08:09.602 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:09.602 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:09.602 associated memzone info: size: 0.015991 MiB name: RG_ring_3_518735 00:08:09.602 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:08:09.602 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:09.602 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:09.602 associated memzone info: size: 0.000183 MiB name: MP_msgpool_518735 00:08:09.602 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:09.602 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_518735 00:08:09.602 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:09.602 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_518735 00:08:09.602 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:08:09.602 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:09.602 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:09.602 12:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 518735 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 518735 ']' 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 518735 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 518735 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.602 12:22:38 dpdk_mem_utility -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 518735' 00:08:09.602 killing process with pid 518735 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 518735 00:08:09.602 12:22:38 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 518735 00:08:09.860 00:08:09.860 real 0m1.094s 00:08:09.860 user 0m1.068s 00:08:09.860 sys 0m0.429s 00:08:09.860 12:22:39 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.860 12:22:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:09.860 ************************************ 00:08:09.860 END TEST dpdk_mem_utility 00:08:09.860 ************************************ 00:08:10.118 12:22:39 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:10.118 12:22:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:10.118 12:22:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.118 12:22:39 -- common/autotest_common.sh@10 -- # set +x 00:08:10.118 ************************************ 00:08:10.118 START TEST event 00:08:10.118 ************************************ 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:10.118 * Looking for test storage... 
00:08:10.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1691 -- # lcov --version 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:10.118 12:22:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.118 12:22:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.118 12:22:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.118 12:22:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.118 12:22:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.118 12:22:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.118 12:22:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.118 12:22:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.118 12:22:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.118 12:22:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.118 12:22:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.118 12:22:39 event -- scripts/common.sh@344 -- # case "$op" in 00:08:10.118 12:22:39 event -- scripts/common.sh@345 -- # : 1 00:08:10.118 12:22:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.118 12:22:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.118 12:22:39 event -- scripts/common.sh@365 -- # decimal 1 00:08:10.118 12:22:39 event -- scripts/common.sh@353 -- # local d=1 00:08:10.118 12:22:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.118 12:22:39 event -- scripts/common.sh@355 -- # echo 1 00:08:10.118 12:22:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.118 12:22:39 event -- scripts/common.sh@366 -- # decimal 2 00:08:10.118 12:22:39 event -- scripts/common.sh@353 -- # local d=2 00:08:10.118 12:22:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.118 12:22:39 event -- scripts/common.sh@355 -- # echo 2 00:08:10.118 12:22:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.118 12:22:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.118 12:22:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.118 12:22:39 event -- scripts/common.sh@368 -- # return 0 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:10.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.118 --rc genhtml_branch_coverage=1 00:08:10.118 --rc genhtml_function_coverage=1 00:08:10.118 --rc genhtml_legend=1 00:08:10.118 --rc geninfo_all_blocks=1 00:08:10.118 --rc geninfo_unexecuted_blocks=1 00:08:10.118 00:08:10.118 ' 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:10.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.118 --rc genhtml_branch_coverage=1 00:08:10.118 --rc genhtml_function_coverage=1 00:08:10.118 --rc genhtml_legend=1 00:08:10.118 --rc geninfo_all_blocks=1 00:08:10.118 --rc geninfo_unexecuted_blocks=1 00:08:10.118 00:08:10.118 ' 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:10.118 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:10.118 --rc genhtml_branch_coverage=1 00:08:10.118 --rc genhtml_function_coverage=1 00:08:10.118 --rc genhtml_legend=1 00:08:10.118 --rc geninfo_all_blocks=1 00:08:10.118 --rc geninfo_unexecuted_blocks=1 00:08:10.118 00:08:10.118 ' 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:10.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.118 --rc genhtml_branch_coverage=1 00:08:10.118 --rc genhtml_function_coverage=1 00:08:10.118 --rc genhtml_legend=1 00:08:10.118 --rc geninfo_all_blocks=1 00:08:10.118 --rc geninfo_unexecuted_blocks=1 00:08:10.118 00:08:10.118 ' 00:08:10.118 12:22:39 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:10.118 12:22:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:10.118 12:22:39 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:10.118 12:22:39 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.118 12:22:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:10.118 ************************************ 00:08:10.118 START TEST event_perf 00:08:10.118 ************************************ 00:08:10.118 12:22:39 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:10.118 Running I/O for 1 seconds...[2024-11-05 12:22:39.318724] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:10.118 [2024-11-05 12:22:39.318785] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518934 ] 00:08:10.376 [2024-11-05 12:22:39.384197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.376 [2024-11-05 12:22:39.432500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.376 [2024-11-05 12:22:39.432611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.376 [2024-11-05 12:22:39.432737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.376 [2024-11-05 12:22:39.432745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.309 Running I/O for 1 seconds... 00:08:11.309 lcore 0: 220567 00:08:11.309 lcore 1: 220566 00:08:11.309 lcore 2: 220565 00:08:11.309 lcore 3: 220566 00:08:11.309 done. 
00:08:11.309 00:08:11.309 real 0m1.174s 00:08:11.309 user 0m4.094s 00:08:11.309 sys 0m0.075s 00:08:11.309 12:22:40 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.309 12:22:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 ************************************ 00:08:11.309 END TEST event_perf 00:08:11.309 ************************************ 00:08:11.309 12:22:40 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:11.309 12:22:40 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:11.309 12:22:40 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.309 12:22:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:11.309 ************************************ 00:08:11.309 START TEST event_reactor 00:08:11.309 ************************************ 00:08:11.309 12:22:40 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:11.309 [2024-11-05 12:22:40.544193] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:11.309 [2024-11-05 12:22:40.544258] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519090 ] 00:08:11.567 [2024-11-05 12:22:40.612328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.567 [2024-11-05 12:22:40.655068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.499 test_start 00:08:12.499 oneshot 00:08:12.499 tick 100 00:08:12.499 tick 100 00:08:12.499 tick 250 00:08:12.499 tick 100 00:08:12.499 tick 100 00:08:12.499 tick 100 00:08:12.499 tick 250 00:08:12.499 tick 500 00:08:12.499 tick 100 00:08:12.499 tick 100 00:08:12.499 tick 250 00:08:12.499 tick 100 00:08:12.499 tick 100 00:08:12.499 test_end 00:08:12.499 00:08:12.499 real 0m1.169s 00:08:12.499 user 0m1.103s 00:08:12.499 sys 0m0.061s 00:08:12.499 12:22:41 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.499 12:22:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:12.499 ************************************ 00:08:12.499 END TEST event_reactor 00:08:12.499 ************************************ 00:08:12.500 12:22:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:12.500 12:22:41 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:12.500 12:22:41 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.500 12:22:41 event -- common/autotest_common.sh@10 -- # set +x 00:08:12.758 ************************************ 00:08:12.758 START TEST event_reactor_perf 00:08:12.758 ************************************ 00:08:12.758 12:22:41 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:08:12.758 [2024-11-05 12:22:41.766575] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:12.758 [2024-11-05 12:22:41.766644] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519242 ] 00:08:12.758 [2024-11-05 12:22:41.829900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.758 [2024-11-05 12:22:41.874459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.691 test_start 00:08:13.691 test_end 00:08:13.691 Performance: 450102 events per second 00:08:13.691 00:08:13.691 real 0m1.166s 00:08:13.691 user 0m1.100s 00:08:13.691 sys 0m0.061s 00:08:13.691 12:22:42 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.691 12:22:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:13.691 ************************************ 00:08:13.691 END TEST event_reactor_perf 00:08:13.691 ************************************ 00:08:13.950 12:22:42 event -- event/event.sh@49 -- # uname -s 00:08:13.950 12:22:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:13.950 12:22:42 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:13.950 12:22:42 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:13.950 12:22:42 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.950 12:22:42 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.950 ************************************ 00:08:13.950 START TEST event_scheduler 00:08:13.950 ************************************ 00:08:13.950 12:22:42 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:13.950 * Looking for test storage... 00:08:13.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.950 12:22:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.950 --rc genhtml_branch_coverage=1 00:08:13.950 --rc genhtml_function_coverage=1 00:08:13.950 --rc genhtml_legend=1 00:08:13.950 --rc geninfo_all_blocks=1 00:08:13.950 --rc geninfo_unexecuted_blocks=1 00:08:13.950 00:08:13.950 ' 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.950 --rc genhtml_branch_coverage=1 00:08:13.950 --rc genhtml_function_coverage=1 00:08:13.950 --rc 
genhtml_legend=1 00:08:13.950 --rc geninfo_all_blocks=1 00:08:13.950 --rc geninfo_unexecuted_blocks=1 00:08:13.950 00:08:13.950 ' 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.950 --rc genhtml_branch_coverage=1 00:08:13.950 --rc genhtml_function_coverage=1 00:08:13.950 --rc genhtml_legend=1 00:08:13.950 --rc geninfo_all_blocks=1 00:08:13.950 --rc geninfo_unexecuted_blocks=1 00:08:13.950 00:08:13.950 ' 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.950 --rc genhtml_branch_coverage=1 00:08:13.950 --rc genhtml_function_coverage=1 00:08:13.950 --rc genhtml_legend=1 00:08:13.950 --rc geninfo_all_blocks=1 00:08:13.950 --rc geninfo_unexecuted_blocks=1 00:08:13.950 00:08:13.950 ' 00:08:13.950 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:13.950 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=519436 00:08:13.950 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:13.950 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:13.950 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 519436 00:08:13.950 12:22:43 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 519436 ']' 00:08:13.951 12:22:43 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.951 12:22:43 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.951 12:22:43 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.951 12:22:43 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.951 12:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:13.951 [2024-11-05 12:22:43.166025] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:13.951 [2024-11-05 12:22:43.166126] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519436 ] 00:08:14.209 [2024-11-05 12:22:43.237166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.209 [2024-11-05 12:22:43.286979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.209 [2024-11-05 12:22:43.287037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.209 [2024-11-05 12:22:43.287100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.209 [2024-11-05 12:22:43.287103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.209 12:22:43 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.209 12:22:43 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:08:14.209 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:14.209 12:22:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.209 12:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:14.209 [2024-11-05 12:22:43.404037] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:14.209 [2024-11-05 12:22:43.404065] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:14.209 [2024-11-05 12:22:43.404082] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:14.209 [2024-11-05 12:22:43.404094] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:14.209 [2024-11-05 12:22:43.404105] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:14.209 12:22:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.209 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:14.209 12:22:43 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.209 12:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 [2024-11-05 12:22:43.501215] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:14.468 12:22:43 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:14.468 12:22:43 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.468 12:22:43 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 ************************************ 00:08:14.468 START TEST scheduler_create_thread 00:08:14.468 ************************************ 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 2 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 3 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 4 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 5 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 6 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 7 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 8 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 9 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 10 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.468 12:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.034 12:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.034 00:08:15.034 real 0m0.589s 00:08:15.034 user 0m0.009s 00:08:15.034 sys 0m0.003s 00:08:15.034 12:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.034 12:22:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.034 ************************************ 00:08:15.034 END TEST scheduler_create_thread 00:08:15.034 ************************************ 00:08:15.034 12:22:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:15.034 12:22:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 519436 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 519436 ']' 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@956 -- # kill 
-0 519436 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 519436 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 519436' 00:08:15.034 killing process with pid 519436 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 519436 00:08:15.034 12:22:44 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 519436 00:08:15.600 [2024-11-05 12:22:44.601283] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:08:15.600 00:08:15.600 real 0m1.810s 00:08:15.600 user 0m2.474s 00:08:15.600 sys 0m0.356s 00:08:15.601 12:22:44 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.601 12:22:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:15.601 ************************************ 00:08:15.601 END TEST event_scheduler 00:08:15.601 ************************************ 00:08:15.601 12:22:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:15.601 12:22:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:15.601 12:22:44 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:15.601 12:22:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.601 12:22:44 event -- common/autotest_common.sh@10 -- # set +x 00:08:15.601 ************************************ 00:08:15.601 START TEST app_repeat 00:08:15.601 ************************************ 00:08:15.601 12:22:44 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:08:15.601 12:22:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.601 12:22:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.601 12:22:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:15.601 12:22:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=519744 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 519744' 00:08:15.859 Process app_repeat pid: 519744 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:15.859 spdk_app_start Round 0 00:08:15.859 12:22:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 519744 /var/tmp/spdk-nbd.sock 00:08:15.859 12:22:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 519744 ']' 00:08:15.859 12:22:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:15.859 12:22:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.859 12:22:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:15.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:15.859 12:22:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.859 12:22:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:15.859 [2024-11-05 12:22:44.867257] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:15.859 [2024-11-05 12:22:44.867324] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519744 ] 00:08:15.859 [2024-11-05 12:22:44.931279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.859 [2024-11-05 12:22:44.975029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.859 [2024-11-05 12:22:44.975033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.117 12:22:45 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.117 12:22:45 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:16.117 12:22:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:16.374 Malloc0 00:08:16.374 12:22:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:16.631 Malloc1 00:08:16.631 12:22:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:16.631 
12:22:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.631 12:22:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:16.888 /dev/nbd0 00:08:16.888 12:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:16.888 12:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:16.888 1+0 records in 00:08:16.888 1+0 records out 00:08:16.888 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157423 s, 26.0 MB/s 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:16.888 12:22:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:16.888 12:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:16.888 12:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.888 12:22:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:17.145 /dev/nbd1 00:08:17.145 12:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:17.145 12:22:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:17.145 12:22:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:17.145 12:22:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:17.145 12:22:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:17.145 12:22:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:17.145 12:22:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:17.145 12:22:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:17.145 12:22:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:17.145 12:22:46 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:17.146 12:22:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:17.146 1+0 records in 00:08:17.146 1+0 records out 00:08:17.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162912 s, 25.1 MB/s 00:08:17.146 12:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:17.146 12:22:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:17.146 12:22:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:17.146 12:22:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:17.146 12:22:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:17.146 12:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.146 12:22:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.146 12:22:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:17.146 12:22:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.146 12:22:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:17.710 { 00:08:17.710 "nbd_device": "/dev/nbd0", 00:08:17.710 "bdev_name": "Malloc0" 00:08:17.710 }, 00:08:17.710 { 00:08:17.710 "nbd_device": "/dev/nbd1", 00:08:17.710 "bdev_name": "Malloc1" 00:08:17.710 } 00:08:17.710 ]' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:17.710 { 00:08:17.710 "nbd_device": "/dev/nbd0", 00:08:17.710 "bdev_name": "Malloc0" 00:08:17.710 
}, 00:08:17.710 { 00:08:17.710 "nbd_device": "/dev/nbd1", 00:08:17.710 "bdev_name": "Malloc1" 00:08:17.710 } 00:08:17.710 ]' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:17.710 /dev/nbd1' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:17.710 /dev/nbd1' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:17.710 256+0 records in 00:08:17.710 256+0 records out 00:08:17.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511844 s, 205 MB/s 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:17.710 256+0 records in 00:08:17.710 256+0 records out 00:08:17.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194568 s, 53.9 MB/s 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:17.710 256+0 records in 00:08:17.710 256+0 records out 00:08:17.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216881 s, 48.3 MB/s 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:17.710 12:22:46 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.710 12:22:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.968 12:22:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:18.225 12:22:47 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.225 12:22:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:18.483 12:22:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:18.483 12:22:47 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:18.741 12:22:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:18.999 [2024-11-05 12:22:48.139919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:18.999 [2024-11-05 12:22:48.183572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.999 [2024-11-05 12:22:48.183572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.256 [2024-11-05 12:22:48.241931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:19.256 [2024-11-05 12:22:48.241999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:21.781 12:22:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:21.781 12:22:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:21.781 spdk_app_start Round 1 00:08:21.781 12:22:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 519744 /var/tmp/spdk-nbd.sock 00:08:21.781 12:22:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 519744 ']' 00:08:21.781 12:22:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:21.781 12:22:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.781 12:22:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:21.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:21.782 12:22:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.782 12:22:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:22.039 12:22:51 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.039 12:22:51 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:22.039 12:22:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.296 Malloc0 00:08:22.296 12:22:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.554 Malloc1 00:08:22.555 12:22:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:22.555 12:22:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:23.120 /dev/nbd0 00:08:23.120 12:22:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:23.120 12:22:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:23.120 1+0 records in 00:08:23.120 1+0 records out 00:08:23.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184342 s, 22.2 MB/s 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:23.120 12:22:52 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:23.120 12:22:52 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:23.120 12:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.120 12:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.120 12:22:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:23.378 /dev/nbd1 00:08:23.378 12:22:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:23.378 12:22:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:23.378 1+0 records in 00:08:23.378 1+0 records out 00:08:23.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188904 s, 21.7 MB/s 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:23.378 12:22:52 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:23.378 12:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.378 12:22:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.378 12:22:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:23.378 12:22:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.378 12:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:23.636 { 00:08:23.636 "nbd_device": "/dev/nbd0", 00:08:23.636 "bdev_name": "Malloc0" 00:08:23.636 }, 00:08:23.636 { 00:08:23.636 "nbd_device": "/dev/nbd1", 00:08:23.636 "bdev_name": "Malloc1" 00:08:23.636 } 00:08:23.636 ]' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.636 { 00:08:23.636 "nbd_device": "/dev/nbd0", 00:08:23.636 "bdev_name": "Malloc0" 00:08:23.636 }, 00:08:23.636 { 00:08:23.636 "nbd_device": "/dev/nbd1", 00:08:23.636 "bdev_name": "Malloc1" 00:08:23.636 } 00:08:23.636 ]' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:23.636 /dev/nbd1' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:23.636 /dev/nbd1' 00:08:23.636 
12:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:23.636 256+0 records in 00:08:23.636 256+0 records out 00:08:23.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508541 s, 206 MB/s 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:23.636 256+0 records in 00:08:23.636 256+0 records out 00:08:23.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190627 s, 55.0 MB/s 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:23.636 256+0 records in 00:08:23.636 256+0 records out 00:08:23.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215543 s, 48.6 MB/s 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.636 12:22:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.202 12:22:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:24.459 12:22:53 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.459 12:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:24.717 12:22:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:24.717 12:22:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:24.975 12:22:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:25.233 [2024-11-05 12:22:54.232789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.233 [2024-11-05 12:22:54.276446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.233 [2024-11-05 12:22:54.276446] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.233 [2024-11-05 12:22:54.335443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:25.233 [2024-11-05 12:22:54.335525] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:28.512 12:22:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:28.512 12:22:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:28.512 spdk_app_start Round 2 00:08:28.513 12:22:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 519744 /var/tmp/spdk-nbd.sock 00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 519744 ']' 00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:28.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:28.513 12:22:57 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:28.513 12:22:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:28.513 Malloc0 00:08:28.513 12:22:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:28.771 Malloc1 00:08:28.771 12:22:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:28.771 12:22:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:29.028 /dev/nbd0 00:08:29.028 12:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:29.028 12:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:29.028 1+0 records in 00:08:29.028 1+0 records out 00:08:29.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000163436 s, 25.1 MB/s 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:29.028 12:22:58 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:29.028 12:22:58 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:29.028 12:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.028 12:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:29.029 12:22:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:29.594 /dev/nbd1 00:08:29.594 12:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:29.594 12:22:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:29.594 1+0 records in 00:08:29.594 1+0 records out 00:08:29.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021475 s, 19.1 MB/s 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:29.594 12:22:58 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:29.594 12:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:29.594 12:22:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:29.594 12:22:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:29.594 12:22:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.594 12:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:29.853 { 00:08:29.853 "nbd_device": "/dev/nbd0", 00:08:29.853 "bdev_name": "Malloc0" 00:08:29.853 }, 00:08:29.853 { 00:08:29.853 "nbd_device": "/dev/nbd1", 00:08:29.853 "bdev_name": "Malloc1" 00:08:29.853 } 00:08:29.853 ]' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:29.853 { 00:08:29.853 "nbd_device": "/dev/nbd0", 00:08:29.853 "bdev_name": "Malloc0" 00:08:29.853 }, 00:08:29.853 { 00:08:29.853 "nbd_device": "/dev/nbd1", 00:08:29.853 "bdev_name": "Malloc1" 00:08:29.853 } 00:08:29.853 ]' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:29.853 /dev/nbd1' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:29.853 /dev/nbd1' 00:08:29.853 
12:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:29.853 256+0 records in 00:08:29.853 256+0 records out 00:08:29.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516766 s, 203 MB/s 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:29.853 256+0 records in 00:08:29.853 256+0 records out 00:08:29.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198389 s, 52.9 MB/s 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:29.853 256+0 records in 00:08:29.853 256+0 records out 00:08:29.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217747 s, 48.2 MB/s 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.853 12:22:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.853 12:22:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.111 12:22:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.369 12:22:59 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.369 12:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.626 12:22:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:30.626 12:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:30.626 12:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:30.884 12:22:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:30.884 12:22:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:31.143 12:23:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:31.401 [2024-11-05 12:23:00.391752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.401 [2024-11-05 12:23:00.439520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.401 [2024-11-05 12:23:00.439524] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.401 [2024-11-05 12:23:00.496290] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:31.401 [2024-11-05 12:23:00.496360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:34.680 12:23:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 519744 /var/tmp/spdk-nbd.sock 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 519744 ']' 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:34.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:34.680 12:23:03 event.app_repeat -- event/event.sh@39 -- # killprocess 519744 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 519744 ']' 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 519744 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 519744 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 519744' 00:08:34.680 killing process with pid 519744 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@971 -- # kill 519744 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@976 -- # wait 519744 00:08:34.680 spdk_app_start is called in Round 0. 00:08:34.680 Shutdown signal received, stop current app iteration 00:08:34.680 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 reinitialization... 00:08:34.680 spdk_app_start is called in Round 1. 00:08:34.680 Shutdown signal received, stop current app iteration 00:08:34.680 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 reinitialization... 00:08:34.680 spdk_app_start is called in Round 2. 
00:08:34.680 Shutdown signal received, stop current app iteration 00:08:34.680 Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 reinitialization... 00:08:34.680 spdk_app_start is called in Round 3. 00:08:34.680 Shutdown signal received, stop current app iteration 00:08:34.680 12:23:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:34.680 12:23:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:34.680 00:08:34.680 real 0m18.834s 00:08:34.680 user 0m41.768s 00:08:34.680 sys 0m3.223s 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.680 12:23:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.680 ************************************ 00:08:34.680 END TEST app_repeat 00:08:34.680 ************************************ 00:08:34.680 12:23:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:34.680 12:23:03 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:34.680 12:23:03 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:34.680 12:23:03 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.680 12:23:03 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.680 ************************************ 00:08:34.680 START TEST cpu_locks 00:08:34.680 ************************************ 00:08:34.680 12:23:03 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:34.680 * Looking for test storage... 
00:08:34.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:34.680 12:23:03 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:34.680 12:23:03 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:34.680 12:23:03 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:34.680 12:23:03 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:34.680 12:23:03 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.680 12:23:03 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.681 12:23:03 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 00:08:34.681 00:08:34.681 ' 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 
00:08:34.681 00:08:34.681 ' 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 00:08:34.681 00:08:34.681 ' 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:34.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.681 --rc genhtml_branch_coverage=1 00:08:34.681 --rc genhtml_function_coverage=1 00:08:34.681 --rc genhtml_legend=1 00:08:34.681 --rc geninfo_all_blocks=1 00:08:34.681 --rc geninfo_unexecuted_blocks=1 00:08:34.681 00:08:34.681 ' 00:08:34.681 12:23:03 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:34.681 12:23:03 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:34.681 12:23:03 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:34.681 12:23:03 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.681 12:23:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.681 ************************************ 00:08:34.681 START TEST default_locks 00:08:34.681 ************************************ 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=522236 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 522236 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 522236 ']' 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.681 12:23:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.939 [2024-11-05 12:23:03.959979] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:34.939 [2024-11-05 12:23:03.960075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522236 ] 00:08:34.939 [2024-11-05 12:23:04.027491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.939 [2024-11-05 12:23:04.072775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.197 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:35.197 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:08:35.197 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 522236 00:08:35.197 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 522236 00:08:35.197 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:35.455 lslocks: write error 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 522236 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 522236 ']' 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 522236 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522236 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 522236' 00:08:35.455 killing process with pid 522236 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 522236 00:08:35.455 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 522236 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 522236 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 522236 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 522236 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 522236 ']' 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:36.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (522236) - No such process 00:08:36.020 ERROR: process (pid: 522236) is no longer running 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:36.020 00:08:36.020 real 0m1.063s 00:08:36.020 user 0m1.013s 00:08:36.020 sys 0m0.492s 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.020 12:23:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:36.020 ************************************ 00:08:36.020 END TEST default_locks 00:08:36.020 ************************************ 00:08:36.021 12:23:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:36.021 12:23:04 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:36.021 12:23:04 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.021 12:23:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:36.021 ************************************ 00:08:36.021 START TEST default_locks_via_rpc 00:08:36.021 ************************************ 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=522369 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 522369 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 522369 ']' 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.021 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.021 [2024-11-05 12:23:05.075265] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:36.021 [2024-11-05 12:23:05.075359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522369 ] 00:08:36.021 [2024-11-05 12:23:05.141001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.021 [2024-11-05 12:23:05.188126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.279 12:23:05 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 522369 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 522369 00:08:36.279 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 522369 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 522369 ']' 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 522369 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522369 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 522369' 00:08:36.537 killing process with pid 522369 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 522369 00:08:36.537 12:23:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 522369 00:08:37.100 00:08:37.100 real 0m1.061s 00:08:37.100 user 0m1.031s 00:08:37.100 sys 0m0.497s 00:08:37.100 12:23:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:37.100 12:23:06 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 ************************************ 00:08:37.100 END TEST default_locks_via_rpc 00:08:37.100 ************************************ 00:08:37.100 12:23:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:37.100 12:23:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:37.100 12:23:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:37.100 12:23:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 ************************************ 00:08:37.100 START TEST non_locking_app_on_locked_coremask 00:08:37.100 ************************************ 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=522504 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 522504 /var/tmp/spdk.sock 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 522504 ']' 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:37.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.100 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:37.100 [2024-11-05 12:23:06.186508] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:37.100 [2024-11-05 12:23:06.186609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522504 ] 00:08:37.100 [2024-11-05 12:23:06.252473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.100 [2024-11-05 12:23:06.298821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=522575 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 522575 /var/tmp/spdk2.sock 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 522575 ']' 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:37.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.357 12:23:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:37.615 [2024-11-05 12:23:06.612301] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:37.615 [2024-11-05 12:23:06.612394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522575 ] 00:08:37.615 [2024-11-05 12:23:06.712808] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:37.615 [2024-11-05 12:23:06.712850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.615 [2024-11-05 12:23:06.801010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.181 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:38.181 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:38.181 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 522504 00:08:38.181 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 522504 00:08:38.181 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:38.747 lslocks: write error 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 522504 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 522504 ']' 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 522504 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522504 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 522504' 00:08:38.747 killing process with pid 522504 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 522504 00:08:38.747 12:23:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 522504 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 522575 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 522575 ']' 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 522575 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522575 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 522575' 00:08:39.312 killing process with pid 522575 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 522575 00:08:39.312 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 522575 00:08:39.878 00:08:39.878 real 0m2.749s 00:08:39.878 user 0m2.784s 00:08:39.878 sys 0m0.948s 00:08:39.878 12:23:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.878 12:23:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 ************************************ 00:08:39.878 END TEST non_locking_app_on_locked_coremask 00:08:39.878 ************************************ 00:08:39.878 12:23:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:39.878 12:23:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:39.878 12:23:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.878 12:23:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 ************************************ 00:08:39.878 START TEST locking_app_on_unlocked_coremask 00:08:39.878 ************************************ 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=522871 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 522871 /var/tmp/spdk.sock 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 522871 ']' 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.878 12:23:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.878 12:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 [2024-11-05 12:23:08.984513] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:39.878 [2024-11-05 12:23:08.984598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522871 ] 00:08:39.878 [2024-11-05 12:23:09.049471] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:39.878 [2024-11-05 12:23:09.049503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.878 [2024-11-05 12:23:09.091283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=522876 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 522876 /var/tmp/spdk2.sock 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 522876 ']' 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.156 12:23:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.156 [2024-11-05 12:23:09.392636] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:40.156 [2024-11-05 12:23:09.392717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522876 ] 00:08:40.414 [2024-11-05 12:23:09.498589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.414 [2024-11-05 12:23:09.595120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.979 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.979 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:40.980 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 522876 00:08:40.980 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 522876 00:08:40.980 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:41.563 lslocks: write error 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 522871 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 522871 ']' 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 522871 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522871 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 522871' 00:08:41.563 killing process with pid 522871 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 522871 00:08:41.563 12:23:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 522871 00:08:42.173 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 522876 00:08:42.173 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 522876 ']' 00:08:42.173 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 522876 00:08:42.173 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:42.173 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.173 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 522876 00:08:42.453 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:42.453 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:42.453 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 522876' 00:08:42.453 killing process with pid 522876 00:08:42.453 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 522876 00:08:42.453 12:23:11 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 522876 00:08:42.711 00:08:42.711 real 0m2.871s 00:08:42.711 user 0m2.890s 00:08:42.711 sys 0m1.038s 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:42.711 ************************************ 00:08:42.711 END TEST locking_app_on_unlocked_coremask 00:08:42.711 ************************************ 00:08:42.711 12:23:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:42.711 12:23:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:42.711 12:23:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.711 12:23:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.711 ************************************ 00:08:42.711 START TEST locking_app_on_locked_coremask 00:08:42.711 ************************************ 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=523310 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 523310 /var/tmp/spdk.sock 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 523310 ']' 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.711 12:23:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:42.711 [2024-11-05 12:23:11.903390] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:42.711 [2024-11-05 12:23:11.903501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523310 ] 00:08:42.969 [2024-11-05 12:23:11.970285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.969 [2024-11-05 12:23:12.019439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.227 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:43.227 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:43.227 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=523315 00:08:43.227 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 523315 /var/tmp/spdk2.sock 00:08:43.227 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:43.227 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 523315 /var/tmp/spdk2.sock 00:08:43.227 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 523315 /var/tmp/spdk2.sock 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 523315 ']' 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:43.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:43.228 12:23:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:43.228 [2024-11-05 12:23:12.332258] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:43.228 [2024-11-05 12:23:12.332340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523315 ] 00:08:43.228 [2024-11-05 12:23:12.431219] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 523310 has claimed it. 00:08:43.228 [2024-11-05 12:23:12.431279] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:44.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (523315) - No such process 00:08:44.160 ERROR: process (pid: 523315) is no longer running 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 523310 00:08:44.160 12:23:13 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 523310 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:44.160 lslocks: write error 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 523310 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 523310 ']' 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 523310 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:44.160 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 523310 00:08:44.418 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:44.418 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:44.418 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 523310' 00:08:44.418 killing process with pid 523310 00:08:44.418 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 523310 00:08:44.418 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 523310 00:08:44.676 00:08:44.676 real 0m1.969s 00:08:44.676 user 0m2.193s 00:08:44.676 sys 0m0.637s 00:08:44.676 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.676 12:23:13 event.cpu_locks.locking_app_on_locked_coremask -- 
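The failure above ("Cannot create lock on core 0, probably process 523310 has claimed it") comes from SPDK's per-core advisory lock files under /var/tmp — the spdk_cpu_lock_* files that the `lslocks | grep -q spdk_cpu_lock` check inspects. A minimal sketch of the same idea with flock(1), using a temp file as a hypothetical stand-in for /var/tmp/spdk_cpu_lock_000 (this is our illustration, not SPDK's actual implementation):

```shell
# Sketch: advisory per-core locking with flock(1).
# $lockfile stands in for SPDK's /var/tmp/spdk_cpu_lock_000.
lockfile=$(mktemp)

exec 9>"$lockfile"            # first "process" opens the lock file
flock -n 9 && echo "claimed core 0"

# A second open file description (as a second spdk_tgt instance would
# create) cannot take the lock while fd 9 still holds it.
( exec 8>"$lockfile"; flock -n 8 ) || echo "Cannot create lock on core 0"
```

Because flock locks belong to the open file description, the lock is released automatically when the holding process exits — which is why a crashed target never leaves a core permanently claimed.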
common/autotest_common.sh@10 -- # set +x 00:08:44.676 ************************************ 00:08:44.676 END TEST locking_app_on_locked_coremask 00:08:44.676 ************************************ 00:08:44.676 12:23:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:44.676 12:23:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.676 12:23:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.676 12:23:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:44.676 ************************************ 00:08:44.676 START TEST locking_overlapped_coremask 00:08:44.676 ************************************ 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=523483 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 523483 /var/tmp/spdk.sock 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 523483 ']' 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.676 12:23:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.935 [2024-11-05 12:23:13.920312] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:44.935 [2024-11-05 12:23:13.920391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523483 ] 00:08:44.935 [2024-11-05 12:23:13.990253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.935 [2024-11-05 12:23:14.041694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.935 [2024-11-05 12:23:14.041759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.935 [2024-11-05 12:23:14.041761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=523613 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 523613 /var/tmp/spdk2.sock 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 523613 /var/tmp/spdk2.sock 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 523613 /var/tmp/spdk2.sock 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 523613 ']' 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:45.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:45.193 12:23:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:45.193 [2024-11-05 12:23:14.363316] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:45.193 [2024-11-05 12:23:14.363397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523613 ] 00:08:45.451 [2024-11-05 12:23:14.469790] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 523483 has claimed it. 00:08:45.451 [2024-11-05 12:23:14.469867] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:46.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (523613) - No such process 00:08:46.016 ERROR: process (pid: 523613) is no longer running 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 523483 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 523483 ']' 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 523483 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 523483 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 523483' 00:08:46.016 killing process with pid 523483 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 523483 00:08:46.016 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 523483 00:08:46.276 00:08:46.276 real 0m1.629s 00:08:46.276 user 0m4.563s 00:08:46.276 sys 0m0.470s 00:08:46.276 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.276 12:23:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:46.276 ************************************ 
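The check_remaining_locks step above (cpu_locks.sh@36-38) verifies that exactly the expected lock files exist by comparing a glob expansion against a brace expansion. The same pattern in isolation, with throwaway files in a temp directory (paths here are ours, for illustration):

```shell
# Create three fake lock files, then compare what the glob actually
# matched against the brace-expanded list of expected names.
dir=$(mktemp -d)
touch "$dir"/spdk_cpu_lock_{000..002}

locks=("$dir"/spdk_cpu_lock_*)                    # what exists (glob, sorted)
locks_expected=("$dir"/spdk_cpu_lock_{000..002})  # what should exist

[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks match"
```

The comparison fails both when a lock file is missing and when a stale extra one survives, which is what makes it a useful post-test invariant.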
00:08:46.276 END TEST locking_overlapped_coremask 00:08:46.276 ************************************ 00:08:46.536 12:23:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:46.536 12:23:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:46.536 12:23:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.536 12:23:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:46.536 ************************************ 00:08:46.536 START TEST locking_overlapped_coremask_via_rpc 00:08:46.536 ************************************ 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=523777 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 523777 /var/tmp/spdk.sock 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 523777 ']' 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:46.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.536 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.536 [2024-11-05 12:23:15.603183] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:46.536 [2024-11-05 12:23:15.603267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523777 ] 00:08:46.536 [2024-11-05 12:23:15.668326] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:46.536 [2024-11-05 12:23:15.668356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.536 [2024-11-05 12:23:15.713676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.536 [2024-11-05 12:23:15.713734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.536 [2024-11-05 12:23:15.713738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=523788 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 523788 /var/tmp/spdk2.sock 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 523788 ']' 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:46.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.794 12:23:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.794 [2024-11-05 12:23:16.027819] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:46.794 [2024-11-05 12:23:16.027915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523788 ] 00:08:47.052 [2024-11-05 12:23:16.132941] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:47.052 [2024-11-05 12:23:16.132984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:47.052 [2024-11-05 12:23:16.236081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.052 [2024-11-05 12:23:16.236116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:47.052 [2024-11-05 12:23:16.236118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.985 12:23:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.985 [2024-11-05 12:23:17.015962] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 523777 has claimed it. 00:08:47.985 request: 00:08:47.985 { 00:08:47.985 "method": "framework_enable_cpumask_locks", 00:08:47.985 "req_id": 1 00:08:47.985 } 00:08:47.985 Got JSON-RPC error response 00:08:47.985 response: 00:08:47.985 { 00:08:47.985 "code": -32603, 00:08:47.985 "message": "Failed to claim CPU core: 2" 00:08:47.985 } 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 523777 /var/tmp/spdk.sock 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- 
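The RPC failure above is a plain JSON-RPC 2.0 exchange over the target's UNIX socket: the framework_enable_cpumask_locks request comes back with an error object (code -32603, "Failed to claim CPU core: 2"). A sketch of pulling the error code out of such a response with sed — the response string is copied from the log, but the parsing helper is ours, not part of SPDK:

```shell
# A JSON-RPC error object like the one logged above.
resp='{"code": -32603, "message": "Failed to claim CPU core: 2"}'

# Extract the numeric error code; -32603 is the JSON-RPC 2.0
# "internal error" code, which SPDK returns for the failed core claim.
code=$(printf '%s' "$resp" | sed -n 's/.*"code": *\(-\{0,1\}[0-9]*\).*/\1/p')
echo "$code"
```

For anything beyond a one-off check, a real JSON parser beats sed; the point here is only that the test asserts on a well-formed JSON-RPC error, not on free-form log text.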
# '[' -z 523777 ']' 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:47.985 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 523788 /var/tmp/spdk2.sock 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 523788 ']' 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:48.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:48.243 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:48.501 00:08:48.501 real 0m2.026s 00:08:48.501 user 0m1.137s 00:08:48.501 sys 0m0.175s 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:48.501 12:23:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.501 ************************************ 00:08:48.501 END TEST locking_overlapped_coremask_via_rpc 00:08:48.501 ************************************ 00:08:48.501 12:23:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:48.501 12:23:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 523777 ]] 00:08:48.501 12:23:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 523777 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 523777 ']' 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 523777 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 523777 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 523777' 00:08:48.501 killing process with pid 523777 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 523777 00:08:48.501 12:23:17 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 523777 00:08:49.067 12:23:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 523788 ]] 00:08:49.067 12:23:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 523788 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 523788 ']' 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 523788 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 523788 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 523788' 00:08:49.067 
killing process with pid 523788 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 523788 00:08:49.067 12:23:18 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 523788 00:08:49.325 12:23:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:49.325 12:23:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:49.325 12:23:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 523777 ]] 00:08:49.325 12:23:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 523777 00:08:49.325 12:23:18 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 523777 ']' 00:08:49.325 12:23:18 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 523777 00:08:49.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (523777) - No such process 00:08:49.325 12:23:18 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 523777 is not found' 00:08:49.325 Process with pid 523777 is not found 00:08:49.325 12:23:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 523788 ]] 00:08:49.325 12:23:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 523788 00:08:49.325 12:23:18 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 523788 ']' 00:08:49.325 12:23:18 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 523788 00:08:49.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (523788) - No such process 00:08:49.325 12:23:18 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 523788 is not found' 00:08:49.325 Process with pid 523788 is not found 00:08:49.325 12:23:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:49.325 00:08:49.325 real 0m14.743s 00:08:49.325 user 0m27.181s 00:08:49.325 sys 0m5.177s 00:08:49.325 12:23:18 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.325 12:23:18 event.cpu_locks -- 
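killprocess, used throughout the cleanup above, probes liveness with `kill -0` before sending a signal and tolerates PIDs that are already gone ("No such process"). A condensed sketch of that pattern — simplified, since the real helper in autotest_common.sh also checks the process name and refuses to kill sudo:

```shell
# Kill a PID if it is still alive; report politely if it is not.
killprocess_sketch() {
  local pid=$1
  if kill -0 "$pid" 2>/dev/null; then          # signal 0 = existence probe
    kill "$pid" && wait "$pid" 2>/dev/null || true
    echo "killed process with pid $pid"
  else
    echo "Process with pid $pid is not found"
  fi
}

sleep 30 & pid=$!
killprocess_sketch "$pid"      # alive: gets killed and reaped
killprocess_sketch "$pid"      # already gone: reported as not found
```

The `wait` matters: without it the killed child would linger as a zombie for the rest of the test run, and a later `kill -0` on the same PID could still succeed.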
common/autotest_common.sh@10 -- # set +x 00:08:49.325 ************************************ 00:08:49.325 END TEST cpu_locks 00:08:49.325 ************************************ 00:08:49.325 00:08:49.325 real 0m39.364s 00:08:49.325 user 1m17.942s 00:08:49.325 sys 0m9.223s 00:08:49.325 12:23:18 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.325 12:23:18 event -- common/autotest_common.sh@10 -- # set +x 00:08:49.325 ************************************ 00:08:49.325 END TEST event 00:08:49.325 ************************************ 00:08:49.325 12:23:18 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:49.325 12:23:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:49.325 12:23:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.325 12:23:18 -- common/autotest_common.sh@10 -- # set +x 00:08:49.325 ************************************ 00:08:49.325 START TEST thread 00:08:49.325 ************************************ 00:08:49.325 12:23:18 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:49.584 * Looking for test storage... 
00:08:49.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.584 12:23:18 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.584 12:23:18 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.584 12:23:18 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.584 12:23:18 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.584 12:23:18 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.584 12:23:18 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.584 12:23:18 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.584 12:23:18 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.584 12:23:18 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.584 12:23:18 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.584 12:23:18 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.584 12:23:18 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:49.584 12:23:18 thread -- scripts/common.sh@345 -- # : 1 00:08:49.584 12:23:18 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.584 12:23:18 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.584 12:23:18 thread -- scripts/common.sh@365 -- # decimal 1 00:08:49.584 12:23:18 thread -- scripts/common.sh@353 -- # local d=1 00:08:49.584 12:23:18 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.584 12:23:18 thread -- scripts/common.sh@355 -- # echo 1 00:08:49.584 12:23:18 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.584 12:23:18 thread -- scripts/common.sh@366 -- # decimal 2 00:08:49.584 12:23:18 thread -- scripts/common.sh@353 -- # local d=2 00:08:49.584 12:23:18 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.584 12:23:18 thread -- scripts/common.sh@355 -- # echo 2 00:08:49.584 12:23:18 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.584 12:23:18 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.584 12:23:18 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.584 12:23:18 thread -- scripts/common.sh@368 -- # return 0 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.584 --rc genhtml_branch_coverage=1 00:08:49.584 --rc genhtml_function_coverage=1 00:08:49.584 --rc genhtml_legend=1 00:08:49.584 --rc geninfo_all_blocks=1 00:08:49.584 --rc geninfo_unexecuted_blocks=1 00:08:49.584 00:08:49.584 ' 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.584 --rc genhtml_branch_coverage=1 00:08:49.584 --rc genhtml_function_coverage=1 00:08:49.584 --rc genhtml_legend=1 00:08:49.584 --rc geninfo_all_blocks=1 00:08:49.584 --rc geninfo_unexecuted_blocks=1 00:08:49.584 00:08:49.584 ' 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.584 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.584 --rc genhtml_branch_coverage=1 00:08:49.584 --rc genhtml_function_coverage=1 00:08:49.584 --rc genhtml_legend=1 00:08:49.584 --rc geninfo_all_blocks=1 00:08:49.584 --rc geninfo_unexecuted_blocks=1 00:08:49.584 00:08:49.584 ' 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.584 --rc genhtml_branch_coverage=1 00:08:49.584 --rc genhtml_function_coverage=1 00:08:49.584 --rc genhtml_legend=1 00:08:49.584 --rc geninfo_all_blocks=1 00:08:49.584 --rc geninfo_unexecuted_blocks=1 00:08:49.584 00:08:49.584 ' 00:08:49.584 12:23:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.584 12:23:18 thread -- common/autotest_common.sh@10 -- # set +x 00:08:49.584 ************************************ 00:08:49.584 START TEST thread_poller_perf 00:08:49.584 ************************************ 00:08:49.584 12:23:18 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:49.584 [2024-11-05 12:23:18.715310] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:49.584 [2024-11-05 12:23:18.715377] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524290 ] 00:08:49.584 [2024-11-05 12:23:18.779959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.842 [2024-11-05 12:23:18.827599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.842 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:50.775 [2024-11-05T11:23:20.014Z] ====================================== 00:08:50.776 [2024-11-05T11:23:20.014Z] busy:2711256297 (cyc) 00:08:50.776 [2024-11-05T11:23:20.014Z] total_run_count: 364000 00:08:50.776 [2024-11-05T11:23:20.014Z] tsc_hz: 2700000000 (cyc) 00:08:50.776 [2024-11-05T11:23:20.014Z] ====================================== 00:08:50.776 [2024-11-05T11:23:20.014Z] poller_cost: 7448 (cyc), 2758 (nsec) 00:08:50.776 00:08:50.776 real 0m1.175s 00:08:50.776 user 0m1.106s 00:08:50.776 sys 0m0.064s 00:08:50.776 12:23:19 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.776 12:23:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:50.776 ************************************ 00:08:50.776 END TEST thread_poller_perf 00:08:50.776 ************************************ 00:08:50.776 12:23:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:50.776 12:23:19 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:50.776 12:23:19 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.776 12:23:19 thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.776 ************************************ 00:08:50.776 START TEST thread_poller_perf 00:08:50.776 
************************************ 00:08:50.776 12:23:19 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:50.776 [2024-11-05 12:23:19.940778] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:50.776 [2024-11-05 12:23:19.940844] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524442 ] 00:08:50.776 [2024-11-05 12:23:20.005454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.034 [2024-11-05 12:23:20.057481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.034 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:51.968 [2024-11-05T11:23:21.206Z] ====================================== 00:08:51.968 [2024-11-05T11:23:21.206Z] busy:2702580378 (cyc) 00:08:51.968 [2024-11-05T11:23:21.206Z] total_run_count: 4500000 00:08:51.968 [2024-11-05T11:23:21.206Z] tsc_hz: 2700000000 (cyc) 00:08:51.968 [2024-11-05T11:23:21.206Z] ====================================== 00:08:51.968 [2024-11-05T11:23:21.206Z] poller_cost: 600 (cyc), 222 (nsec) 00:08:51.968 00:08:51.968 real 0m1.177s 00:08:51.968 user 0m1.108s 00:08:51.968 sys 0m0.063s 00:08:51.968 12:23:21 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.968 12:23:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:51.968 ************************************ 00:08:51.968 END TEST thread_poller_perf 00:08:51.968 ************************************ 00:08:51.968 12:23:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:51.968 00:08:51.968 real 0m2.584s 00:08:51.968 user 0m2.347s 00:08:51.968 sys 0m0.240s 00:08:51.968 12:23:21 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.968 12:23:21 thread -- common/autotest_common.sh@10 -- # set +x 00:08:51.968 ************************************ 00:08:51.968 END TEST thread 00:08:51.968 ************************************ 00:08:51.968 12:23:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:51.968 12:23:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:51.968 12:23:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:51.968 12:23:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.968 12:23:21 -- common/autotest_common.sh@10 -- # set +x 00:08:51.968 ************************************ 00:08:51.968 START TEST app_cmdline 00:08:51.968 ************************************ 00:08:51.968 12:23:21 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:52.226 * Looking for test storage... 00:08:52.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.226 12:23:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:52.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.226 --rc genhtml_branch_coverage=1 
00:08:52.226 --rc genhtml_function_coverage=1 00:08:52.226 --rc genhtml_legend=1 00:08:52.226 --rc geninfo_all_blocks=1 00:08:52.226 --rc geninfo_unexecuted_blocks=1 00:08:52.226 00:08:52.226 ' 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:52.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.226 --rc genhtml_branch_coverage=1 00:08:52.226 --rc genhtml_function_coverage=1 00:08:52.226 --rc genhtml_legend=1 00:08:52.226 --rc geninfo_all_blocks=1 00:08:52.226 --rc geninfo_unexecuted_blocks=1 00:08:52.226 00:08:52.226 ' 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:52.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.226 --rc genhtml_branch_coverage=1 00:08:52.226 --rc genhtml_function_coverage=1 00:08:52.226 --rc genhtml_legend=1 00:08:52.226 --rc geninfo_all_blocks=1 00:08:52.226 --rc geninfo_unexecuted_blocks=1 00:08:52.226 00:08:52.226 ' 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:52.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.226 --rc genhtml_branch_coverage=1 00:08:52.226 --rc genhtml_function_coverage=1 00:08:52.226 --rc genhtml_legend=1 00:08:52.226 --rc geninfo_all_blocks=1 00:08:52.226 --rc geninfo_unexecuted_blocks=1 00:08:52.226 00:08:52.226 ' 00:08:52.226 12:23:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:52.226 12:23:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=524643 00:08:52.226 12:23:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:52.226 12:23:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 524643 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 524643 ']' 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.226 12:23:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:52.226 [2024-11-05 12:23:21.382406] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:08:52.226 [2024-11-05 12:23:21.382512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524643 ] 00:08:52.226 [2024-11-05 12:23:21.447350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.484 [2024-11-05 12:23:21.495539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.741 12:23:21 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.741 12:23:21 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:52.741 12:23:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:52.998 { 00:08:52.998 "version": "SPDK v25.01-pre git sha1 f220d590c", 00:08:52.998 "fields": { 00:08:52.998 "major": 25, 00:08:52.998 "minor": 1, 00:08:52.999 "patch": 0, 00:08:52.999 "suffix": "-pre", 00:08:52.999 "commit": "f220d590c" 00:08:52.999 } 00:08:52.999 } 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:52.999 12:23:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type 
-t "$arg")" in 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:52.999 12:23:22 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:53.256 request: 00:08:53.256 { 00:08:53.256 "method": "env_dpdk_get_mem_stats", 00:08:53.256 "req_id": 1 00:08:53.256 } 00:08:53.256 Got JSON-RPC error response 00:08:53.256 response: 00:08:53.256 { 00:08:53.256 "code": -32601, 00:08:53.256 "message": "Method not found" 00:08:53.256 } 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.256 12:23:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 524643 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 524643 ']' 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 524643 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 524643 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 524643' 00:08:53.256 killing process with pid 524643 00:08:53.256 12:23:22 
app_cmdline -- common/autotest_common.sh@971 -- # kill 524643 00:08:53.256 12:23:22 app_cmdline -- common/autotest_common.sh@976 -- # wait 524643 00:08:53.515 00:08:53.515 real 0m1.579s 00:08:53.515 user 0m1.964s 00:08:53.515 sys 0m0.487s 00:08:53.774 12:23:22 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.774 12:23:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:53.774 ************************************ 00:08:53.774 END TEST app_cmdline 00:08:53.774 ************************************ 00:08:53.774 12:23:22 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:53.774 12:23:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:53.774 12:23:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.774 12:23:22 -- common/autotest_common.sh@10 -- # set +x 00:08:53.774 ************************************ 00:08:53.774 START TEST version 00:08:53.774 ************************************ 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:53.774 * Looking for test storage... 
00:08:53.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:53.774 12:23:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.774 12:23:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.774 12:23:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.774 12:23:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.774 12:23:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.774 12:23:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.774 12:23:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.774 12:23:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.774 12:23:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.774 12:23:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.774 12:23:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.774 12:23:22 version -- scripts/common.sh@344 -- # case "$op" in 00:08:53.774 12:23:22 version -- scripts/common.sh@345 -- # : 1 00:08:53.774 12:23:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.774 12:23:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.774 12:23:22 version -- scripts/common.sh@365 -- # decimal 1 00:08:53.774 12:23:22 version -- scripts/common.sh@353 -- # local d=1 00:08:53.774 12:23:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.774 12:23:22 version -- scripts/common.sh@355 -- # echo 1 00:08:53.774 12:23:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.774 12:23:22 version -- scripts/common.sh@366 -- # decimal 2 00:08:53.774 12:23:22 version -- scripts/common.sh@353 -- # local d=2 00:08:53.774 12:23:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.774 12:23:22 version -- scripts/common.sh@355 -- # echo 2 00:08:53.774 12:23:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.774 12:23:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.774 12:23:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.774 12:23:22 version -- scripts/common.sh@368 -- # return 0 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:53.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.774 --rc genhtml_branch_coverage=1 00:08:53.774 --rc genhtml_function_coverage=1 00:08:53.774 --rc genhtml_legend=1 00:08:53.774 --rc geninfo_all_blocks=1 00:08:53.774 --rc geninfo_unexecuted_blocks=1 00:08:53.774 00:08:53.774 ' 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:53.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.774 --rc genhtml_branch_coverage=1 00:08:53.774 --rc genhtml_function_coverage=1 00:08:53.774 --rc genhtml_legend=1 00:08:53.774 --rc geninfo_all_blocks=1 00:08:53.774 --rc geninfo_unexecuted_blocks=1 00:08:53.774 00:08:53.774 ' 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:53.774 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.774 --rc genhtml_branch_coverage=1 00:08:53.774 --rc genhtml_function_coverage=1 00:08:53.774 --rc genhtml_legend=1 00:08:53.774 --rc geninfo_all_blocks=1 00:08:53.774 --rc geninfo_unexecuted_blocks=1 00:08:53.774 00:08:53.774 ' 00:08:53.774 12:23:22 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:53.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.774 --rc genhtml_branch_coverage=1 00:08:53.774 --rc genhtml_function_coverage=1 00:08:53.774 --rc genhtml_legend=1 00:08:53.774 --rc geninfo_all_blocks=1 00:08:53.774 --rc geninfo_unexecuted_blocks=1 00:08:53.774 00:08:53.774 ' 00:08:53.774 12:23:22 version -- app/version.sh@17 -- # get_header_version major 00:08:53.774 12:23:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # cut -f2 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:53.774 12:23:22 version -- app/version.sh@17 -- # major=25 00:08:53.774 12:23:22 version -- app/version.sh@18 -- # get_header_version minor 00:08:53.774 12:23:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # cut -f2 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:53.774 12:23:22 version -- app/version.sh@18 -- # minor=1 00:08:53.774 12:23:22 version -- app/version.sh@19 -- # get_header_version patch 00:08:53.774 12:23:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # cut -f2 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:53.774 
12:23:22 version -- app/version.sh@19 -- # patch=0 00:08:53.774 12:23:22 version -- app/version.sh@20 -- # get_header_version suffix 00:08:53.774 12:23:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # cut -f2 00:08:53.774 12:23:22 version -- app/version.sh@14 -- # tr -d '"' 00:08:53.774 12:23:22 version -- app/version.sh@20 -- # suffix=-pre 00:08:53.774 12:23:22 version -- app/version.sh@22 -- # version=25.1 00:08:53.774 12:23:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:53.774 12:23:22 version -- app/version.sh@28 -- # version=25.1rc0 00:08:53.774 12:23:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:53.774 12:23:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:53.774 12:23:23 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:53.774 12:23:23 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:53.774 00:08:53.774 real 0m0.191s 00:08:53.774 user 0m0.122s 00:08:53.774 sys 0m0.094s 00:08:53.774 12:23:23 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.774 12:23:23 version -- common/autotest_common.sh@10 -- # set +x 00:08:53.774 ************************************ 00:08:53.774 END TEST version 00:08:53.775 ************************************ 00:08:54.033 12:23:23 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:54.033 12:23:23 -- spdk/autotest.sh@194 -- # uname -s 00:08:54.033 12:23:23 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:54.033 12:23:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:54.033 12:23:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:54.033 12:23:23 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:54.033 12:23:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:54.033 12:23:23 -- common/autotest_common.sh@10 -- # set +x 00:08:54.033 12:23:23 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:54.033 12:23:23 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:54.033 12:23:23 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:54.033 12:23:23 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:54.033 12:23:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.033 12:23:23 -- common/autotest_common.sh@10 -- # set +x 00:08:54.033 ************************************ 00:08:54.033 START TEST nvmf_tcp 00:08:54.033 ************************************ 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:54.033 * Looking for test storage... 
00:08:54.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.033 12:23:23 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.033 --rc genhtml_branch_coverage=1 00:08:54.033 --rc genhtml_function_coverage=1 00:08:54.033 --rc genhtml_legend=1 00:08:54.033 --rc geninfo_all_blocks=1 00:08:54.033 --rc geninfo_unexecuted_blocks=1 00:08:54.033 00:08:54.033 ' 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.033 --rc genhtml_branch_coverage=1 00:08:54.033 --rc genhtml_function_coverage=1 00:08:54.033 --rc genhtml_legend=1 00:08:54.033 --rc geninfo_all_blocks=1 00:08:54.033 --rc geninfo_unexecuted_blocks=1 00:08:54.033 00:08:54.033 ' 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:08:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.033 --rc genhtml_branch_coverage=1 00:08:54.033 --rc genhtml_function_coverage=1 00:08:54.033 --rc genhtml_legend=1 00:08:54.033 --rc geninfo_all_blocks=1 00:08:54.033 --rc geninfo_unexecuted_blocks=1 00:08:54.033 00:08:54.033 ' 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.033 --rc genhtml_branch_coverage=1 00:08:54.033 --rc genhtml_function_coverage=1 00:08:54.033 --rc genhtml_legend=1 00:08:54.033 --rc geninfo_all_blocks=1 00:08:54.033 --rc geninfo_unexecuted_blocks=1 00:08:54.033 00:08:54.033 ' 00:08:54.033 12:23:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:54.033 12:23:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:54.033 12:23:23 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.033 12:23:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:54.033 ************************************ 00:08:54.033 START TEST nvmf_target_core 00:08:54.033 ************************************ 00:08:54.033 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:54.292 * Looking for test storage... 
00:08:54.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 
00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:54.292 12:23:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.293 ************************************ 00:08:54.293 START TEST nvmf_abort 00:08:54.293 ************************************ 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:54.293 * Looking for test storage... 
00:08:54.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:08:54.293 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.552 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.553 
12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:54.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.553 --rc genhtml_branch_coverage=1 00:08:54.553 --rc genhtml_function_coverage=1 00:08:54.553 --rc genhtml_legend=1 00:08:54.553 --rc geninfo_all_blocks=1 00:08:54.553 --rc 
geninfo_unexecuted_blocks=1 00:08:54.553 00:08:54.553 ' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:54.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.553 --rc genhtml_branch_coverage=1 00:08:54.553 --rc genhtml_function_coverage=1 00:08:54.553 --rc genhtml_legend=1 00:08:54.553 --rc geninfo_all_blocks=1 00:08:54.553 --rc geninfo_unexecuted_blocks=1 00:08:54.553 00:08:54.553 ' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:54.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.553 --rc genhtml_branch_coverage=1 00:08:54.553 --rc genhtml_function_coverage=1 00:08:54.553 --rc genhtml_legend=1 00:08:54.553 --rc geninfo_all_blocks=1 00:08:54.553 --rc geninfo_unexecuted_blocks=1 00:08:54.553 00:08:54.553 ' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:54.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.553 --rc genhtml_branch_coverage=1 00:08:54.553 --rc genhtml_function_coverage=1 00:08:54.553 --rc genhtml_legend=1 00:08:54.553 --rc geninfo_all_blocks=1 00:08:54.553 --rc geninfo_unexecuted_blocks=1 00:08:54.553 00:08:54.553 ' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.553 12:23:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.553 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.554 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:54.554 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:08:54.554 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:54.554 12:23:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.086 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.086 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.086 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.086 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.087 12:23:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:57.087 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:57.087 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.087 12:23:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:57.087 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:08:57.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:57.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:08:57.087 00:08:57.087 --- 10.0.0.2 ping statistics --- 00:08:57.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.087 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:08:57.087 00:08:57.087 --- 10.0.0.1 ping statistics --- 00:08:57.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.087 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.087 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=526732 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 526732 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 526732 ']' 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.088 12:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 [2024-11-05 12:23:25.979066] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:08:57.088 [2024-11-05 12:23:25.979146] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.088 [2024-11-05 12:23:26.049682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:57.088 [2024-11-05 12:23:26.093996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.088 [2024-11-05 12:23:26.094059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.088 [2024-11-05 12:23:26.094073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.088 [2024-11-05 12:23:26.094084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.088 [2024-11-05 12:23:26.094093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
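The `-m 0xE` core mask passed to `nvmf_tgt` above selects CPU cores 1, 2 and 3 (bits 1–3 of the mask), which matches the "Total cores available: 3" notice and the three reactors the log shows starting on exactly those cores. A minimal sketch of how such a mask decodes to a core list (the helper function name is illustrative, not part of SPDK):

```shell
# Decode a DPDK/SPDK-style hex core mask into the list of selected cores.
# mask_to_cores is an illustrative helper, not an SPDK utility.
mask_to_cores() {
  local mask=$(( $1 )) bit=0 cores=()
  while (( mask )); do
    # If the low bit is set, this core index is selected.
    if (( mask & 1 )); then cores+=("$bit"); fi
    (( mask >>= 1 ))
    (( bit += 1 ))
  done
  echo "${cores[*]}"
}

mask_to_cores 0xE   # prints "1 2 3"
```

This is why the trace reports reactors only on cores 1, 2 and 3: bit 0 of `0xE` (binary `1110`) is clear, so core 0 is left out of the reactor set.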
00:08:57.088 [2024-11-05 12:23:26.095547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.088 [2024-11-05 12:23:26.095656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.088 [2024-11-05 12:23:26.095653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 [2024-11-05 12:23:26.236295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 Malloc0 00:08:57.088 12:23:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 Delay0 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 [2024-11-05 12:23:26.301532] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.088 12:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:57.346 [2024-11-05 12:23:26.407199] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:59.244 Initializing NVMe Controllers 00:08:59.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:59.244 controller IO queue size 128 less than required 00:08:59.244 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:59.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:59.244 Initialization complete. Launching workers. 
00:08:59.244 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25281 00:08:59.244 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25342, failed to submit 62 00:08:59.244 success 25285, unsuccessful 57, failed 0 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.244 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.244 rmmod nvme_tcp 00:08:59.244 rmmod nvme_fabrics 00:08:59.244 rmmod nvme_keyring 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:59.502 12:23:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 526732 ']' 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 526732 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 526732 ']' 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 526732 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 526732 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 526732' 00:08:59.502 killing process with pid 526732 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 526732 00:08:59.502 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 526732 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.763 12:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.670 00:09:01.670 real 0m7.368s 00:09:01.670 user 0m10.317s 00:09:01.670 sys 0m2.644s 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:01.670 ************************************ 00:09:01.670 END TEST nvmf_abort 00:09:01.670 ************************************ 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.670 ************************************ 00:09:01.670 START TEST nvmf_ns_hotplug_stress 00:09:01.670 ************************************ 00:09:01.670 12:23:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:01.670 * Looking for test storage... 00:09:01.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.670 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.929 
12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.929 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.929 12:23:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.930 --rc genhtml_branch_coverage=1 00:09:01.930 --rc genhtml_function_coverage=1 00:09:01.930 --rc genhtml_legend=1 00:09:01.930 --rc geninfo_all_blocks=1 00:09:01.930 --rc geninfo_unexecuted_blocks=1 00:09:01.930 00:09:01.930 ' 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.930 --rc genhtml_branch_coverage=1 00:09:01.930 --rc genhtml_function_coverage=1 00:09:01.930 --rc genhtml_legend=1 00:09:01.930 --rc geninfo_all_blocks=1 00:09:01.930 --rc geninfo_unexecuted_blocks=1 00:09:01.930 00:09:01.930 ' 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.930 --rc genhtml_branch_coverage=1 00:09:01.930 --rc genhtml_function_coverage=1 00:09:01.930 --rc genhtml_legend=1 00:09:01.930 --rc geninfo_all_blocks=1 00:09:01.930 --rc geninfo_unexecuted_blocks=1 00:09:01.930 00:09:01.930 ' 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.930 --rc genhtml_branch_coverage=1 00:09:01.930 --rc genhtml_function_coverage=1 00:09:01.930 --rc genhtml_legend=1 00:09:01.930 --rc geninfo_all_blocks=1 00:09:01.930 --rc geninfo_unexecuted_blocks=1 00:09:01.930 
00:09:01.930 ' 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.930 12:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.930 12:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.464 12:23:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:04.464 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:04.464 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.464 12:23:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:04.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:04.464 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:04.465 12:23:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:04.465 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.465 12:23:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:04.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:09:04.465 00:09:04.465 --- 10.0.0.2 ping statistics --- 00:09:04.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.465 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:09:04.465 00:09:04.465 --- 10.0.0.1 ping statistics --- 00:09:04.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.465 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=529089 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 529089 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 529089 ']' 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.465 [2024-11-05 12:23:33.422546] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:09:04.465 [2024-11-05 12:23:33.422638] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.465 [2024-11-05 12:23:33.495464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.465 [2024-11-05 12:23:33.543465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.465 [2024-11-05 12:23:33.543532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.465 [2024-11-05 12:23:33.543545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.465 [2024-11-05 12:23:33.543556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.465 [2024-11-05 12:23:33.543567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:04.465 [2024-11-05 12:23:33.545063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.465 [2024-11-05 12:23:33.545126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.465 [2024-11-05 12:23:33.545129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:04.465 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:04.723 [2024-11-05 12:23:33.934736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:04.723 12:23:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.288 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.288 [2024-11-05 12:23:34.469532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.288 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.545 12:23:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:05.803 Malloc0 00:09:06.060 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:06.060 Delay0 00:09:06.318 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.575 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:06.832 NULL1 00:09:06.832 12:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:07.089 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=529393 00:09:07.089 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:07.089 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:07.089 12:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.461 Read completed with error (sct=0, sc=11) 00:09:08.461 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.461 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:08.461 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:08.718 true 00:09:08.718 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:08.718 12:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:09.650 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.908 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:09.908 12:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:09.908 true 00:09:10.165 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:10.165 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.423 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.680 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:10.680 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:10.937 true 00:09:10.937 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:10.937 12:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.194 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.452 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:11.452 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:11.709 true 00:09:11.709 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:11.709 12:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.641 12:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.156 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:13.156 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:13.156 true 00:09:13.413 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:13.413 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.671 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.928 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:13.928 12:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:14.185 true 00:09:14.185 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:14.185 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.443 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.700 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:14.700 12:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:14.958 true 00:09:14.958 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:14.958 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.890 12:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.147 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:16.147 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:16.404 true 00:09:16.405 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:16.405 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.662 12:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.919 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:16.919 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:17.176 true 00:09:17.176 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:17.176 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.433 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.691 
12:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:17.691 12:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:17.948 true 00:09:17.948 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:17.948 12:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.880 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.880 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.137 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:19.137 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:19.394 true 00:09:19.394 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:19.395 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.652 12:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.909 12:23:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:19.909 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:20.166 true 00:09:20.166 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:20.166 12:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.097 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.354 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:21.354 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:21.612 true 00:09:21.612 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:21.612 12:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.869 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.126 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:22.126 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:22.383 true 00:09:22.640 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:22.641 12:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.573 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.573 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:23.573 12:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:23.830 true 00:09:23.830 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:23.830 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.395 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.395 
12:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:24.395 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:24.652 true 00:09:24.652 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:24.652 12:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.909 12:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.474 12:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:25.474 12:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:25.474 true 00:09:25.474 12:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:25.474 12:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.414 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.672 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:09:26.673 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.673 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:26.673 12:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:26.931 true 00:09:26.931 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:26.931 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.189 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.756 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:27.756 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:27.756 true 00:09:27.756 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:27.756 12:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.693 12:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.693 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:09:28.951 12:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:28.951 12:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:29.209 true 00:09:29.209 12:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:29.209 12:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.468 12:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.726 12:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:29.726 12:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:29.984 true 00:09:29.984 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:29.984 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.242 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.499 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:30.499 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:30.757 true 00:09:30.757 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:30.757 12:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.133 12:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.133 12:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:32.133 12:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:32.409 true 00:09:32.410 12:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:32.410 12:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.429 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.429 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:33.429 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:33.724 true 00:09:33.724 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:33.724 12:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.983 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.241 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:34.241 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:34.499 true 00:09:34.499 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:34.499 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:34.758 12:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.016 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:35.016 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:35.274 true 00:09:35.274 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:35.274 12:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.214 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.472 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:36.472 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:36.730 true 00:09:36.730 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393 00:09:36.730 12:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.988 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:37.247 Initializing NVMe Controllers
00:09:37.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:37.247 Controller IO queue size 128, less than required.
00:09:37.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:37.247 Controller IO queue size 128, less than required.
00:09:37.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:37.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:37.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:37.247 Initialization complete. Launching workers.
00:09:37.247 ========================================================
00:09:37.247                                       Latency(us)
00:09:37.247 Device Information                                       :       IOPS      MiB/s    Average        min        max
00:09:37.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     971.68       0.47   60465.10    3389.64 1012636.55
00:09:37.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    9526.81       4.65   13435.47    3378.84  454935.70
00:09:37.247 ========================================================
00:09:37.247 Total                                                    :   10498.49       5.13   17788.25    3378.84 1012636.55
00:09:37.247
00:09:37.247 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:37.247 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:37.505 true
00:09:37.764 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 529393
00:09:37.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (529393) - No such process
00:09:37.764 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 529393
00:09:37.764 12:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:38.022 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:38.280 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:38.280 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
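The cycle that repeats throughout the trace above is the stress loop from ns_hotplug_stress.sh (markers @44-@50): while the I/O generator (PID 529393) is still alive, hot-remove namespace 1, re-attach the Delay0 bdev, and grow the NULL1 bdev by one unit per iteration. A minimal standalone sketch of that loop, with rpc.py stubbed out as a local function and $$ standing in for the generator PID (both are assumptions for illustration; the real script drives a live nvmf target over the RPC socket):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress loop seen in the trace (ns_hotplug_stress.sh @44-@50).
# rpc() is a stand-in for spdk/scripts/rpc.py; a real run talks to a running nvmf target.
rpc() { echo "rpc $*"; }

io_pid=$$        # placeholder: the real loop watches the backgrounded I/O generator's PID
null_size=1000   # the trace counts this up by one per iteration

for _ in 1 2 3; do
    kill -0 "$io_pid" 2>/dev/null || break                        # @44: stop once the I/O process exits
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add it back
    null_size=$((null_size + 1))                                  # @49: bump the target size
    rpc bdev_null_resize NULL1 "$null_size"                       # @50: resize under live I/O
done
echo "final null_size=$null_size"
```

Once kill -0 fails (the "No such process" line above), the loop ends and the script tears both namespaces down before the multi-threaded phase.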
00:09:38.280 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:38.280 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:38.280 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:38.538 null0 00:09:38.538 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:38.538 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:38.538 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:38.796 null1 00:09:38.796 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:38.796 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:38.796 12:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:39.054 null2 00:09:39.054 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:39.054 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:39.054 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:39.312 null3 00:09:39.312 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:39.312 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:39.312 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:39.571 null4 00:09:39.571 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:39.571 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:39.571 12:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:39.829 null5 00:09:39.829 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:39.829 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:39.829 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:40.088 null6 00:09:40.088 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:40.088 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:40.088 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:40.346 null7 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:40.346 12:24:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:40.346 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 534081 534082 534084 534086 534088 534090 534092 534094
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.347 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:40.914 12:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.172 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:41.431 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:41.689 12:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:41.948 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.206 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.207 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:42.465 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:42.725 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.725 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.725 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:42.983 12:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:43.242 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:43.501 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:43.502 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:43.760 12:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5
nqn.2016-06.io.spdk:cnode1 null4 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:44.017 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.018 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:44.275 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:44.842 12:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:44.842 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:45.101 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:45.101 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:45.101 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:45.101 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:45.101 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:45.101 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:45.101 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.359 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:45.617 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:45.617 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:45.617 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:45.617 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:45.617 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:09:45.618 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:45.618 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.618 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:45.876 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:46.135 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.135 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:46.135 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:46.135 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:46.135 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:46.135 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:46.135 12:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.135 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.394 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.652 rmmod nvme_tcp 00:09:46.652 rmmod nvme_fabrics 00:09:46.652 rmmod nvme_keyring 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 529089 ']' 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 529089 00:09:46.652 12:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 529089 ']' 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 529089 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 529089 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 529089' 00:09:46.652 killing process with pid 529089 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 529089 00:09:46.652 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 529089 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.913 12:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.823 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.823 00:09:48.823 real 0m47.133s 00:09:48.823 user 3m39.137s 00:09:48.823 sys 0m15.924s 00:09:48.823 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.823 12:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.823 ************************************ 00:09:48.823 END TEST nvmf_ns_hotplug_stress 00:09:48.823 ************************************ 00:09:48.823 12:24:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:48.823 12:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:48.823 12:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.823 12:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:48.823 ************************************ 00:09:48.823 START TEST 
nvmf_delete_subsystem 00:09:48.823 ************************************ 00:09:48.823 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:49.084 * Looking for test storage... 00:09:49.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.084 12:24:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:49.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.084 --rc genhtml_branch_coverage=1 00:09:49.084 --rc genhtml_function_coverage=1 00:09:49.084 --rc genhtml_legend=1 00:09:49.084 --rc geninfo_all_blocks=1 00:09:49.084 --rc geninfo_unexecuted_blocks=1 00:09:49.084 00:09:49.084 ' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:49.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.084 --rc genhtml_branch_coverage=1 00:09:49.084 --rc genhtml_function_coverage=1 00:09:49.084 --rc genhtml_legend=1 00:09:49.084 --rc geninfo_all_blocks=1 00:09:49.084 --rc geninfo_unexecuted_blocks=1 00:09:49.084 00:09:49.084 ' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:49.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.084 --rc genhtml_branch_coverage=1 00:09:49.084 --rc genhtml_function_coverage=1 00:09:49.084 --rc genhtml_legend=1 00:09:49.084 --rc geninfo_all_blocks=1 00:09:49.084 --rc geninfo_unexecuted_blocks=1 00:09:49.084 00:09:49.084 ' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:49.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.084 --rc genhtml_branch_coverage=1 00:09:49.084 --rc genhtml_function_coverage=1 00:09:49.084 --rc genhtml_legend=1 00:09:49.084 --rc geninfo_all_blocks=1 
00:09:49.084 --rc geninfo_unexecuted_blocks=1 00:09:49.084 00:09:49.084 ' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.084 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.085 12:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.623 12:24:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:51.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:51.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:51.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:09:51.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.623 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:51.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:51.624 00:09:51.624 --- 10.0.0.2 ping statistics --- 00:09:51.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.624 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:09:51.624 00:09:51.624 --- 10.0.0.1 ping statistics --- 00:09:51.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.624 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:51.624 12:24:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=536982
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 536982
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 536982 ']'
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:51.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.624 [2024-11-05 12:24:20.577586] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization...
00:09:51.624 [2024-11-05 12:24:20.577666] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:51.624 [2024-11-05 12:24:20.665184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:51.624 [2024-11-05 12:24:20.715362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:51.624 [2024-11-05 12:24:20.715428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:51.624 [2024-11-05 12:24:20.715460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:51.624 [2024-11-05 12:24:20.715479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:51.624 [2024-11-05 12:24:20.715495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:51.624 [2024-11-05 12:24:20.717211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:51.624 [2024-11-05 12:24:20.717219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.624 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.882 [2024-11-05 12:24:20.863838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.882 [2024-11-05 12:24:20.880106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.882 NULL1
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.882 Delay0
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=537010
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:09:51.882 12:24:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:09:51.882 [2024-11-05 12:24:20.964898] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
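Everything up to this point is the test's setup phase. Condensed out of the xtrace noise, the RPC sequence is sketched below; `rpc_cmd` here is a hypothetical stub that only records each call (in the real run it wraps SPDK's RPC client talking to the `nvmf_tgt` on `/var/tmp/spdk.sock`), so this sketch runs without a live target.

```shell
# Stand-alone sketch of the setup sequence traced above.
# NOTE: rpc_cmd is a stub for illustration, not the real autotest wrapper.
RPC_COUNT=0
RPC_LOG=""
rpc_cmd() {
    RPC_COUNT=$((RPC_COUNT + 1))
    RPC_LOG="${RPC_LOG}${*}
"
}

# Transport, subsystem, and listener (delete_subsystem.sh @15-@17 above)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# A null bdev wrapped in a delay bdev (@18-@24): the artificial latency keeps
# I/O in flight so the later nvmf_delete_subsystem races against active I/O.
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

printf '%s' "$RPC_LOG"
echo "recorded $RPC_COUNT rpc calls"
```

With the namespace attached, the test backgrounds `spdk_nvme_perf` against the listener and then deletes the subsystem underneath it, which is what produces the failed-I/O burst that follows.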
00:09:53.780 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:53.780 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:53.780 12:24:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:54.040 Read completed with error (sct=0, sc=8)
00:09:54.040 starting I/O failed: -6
00:09:54.040 Write completed with error (sct=0, sc=8)
[... several hundred repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines (00:09:54.040-00:09:54.976) elided ...]
00:09:54.041 [2024-11-05 12:24:23.089301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d150 is same with the state(6) to be set
00:09:54.975 [2024-11-05 12:24:24.060312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5b190 is same with the state(6) to be set
00:09:54.975 [2024-11-05 12:24:24.090155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eff2000d020 is same with the state(6) to be set
00:09:54.975 [2024-11-05 12:24:24.090429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eff2000d680 is same with the state(6) to be set
00:09:54.975 [2024-11-05 12:24:24.090686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5d510 is same with the state(6) to be set
00:09:54.976 [2024-11-05 12:24:24.091160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5cf70 is same with the state(6) to be set
00:09:54.976 Initializing NVMe Controllers
00:09:54.976 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:54.976 Controller IO queue size 128, less than required.
00:09:54.976 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:54.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:54.976 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:54.976 Initialization complete. Launching workers.
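The run ends with spdk_nvme_perf's Latency(us) summary just below. As a quick sanity check, its Total IOPS row is simply the two per-core rows combined; the figures here are copied from that summary, and awk does the addition since the shell itself has no floating point.

```shell
# Cross-check the perf summary: Total IOPS = core 2 IOPS + core 3 IOPS.
# Values copied from the Latency(us) table in this log.
core2_iops=186.54
core3_iops=186.04
total_iops=$(awk -v a="$core2_iops" -v b="$core3_iops" 'BEGIN { printf "%.2f", a + b }')
echo "core2 + core3 = $total_iops IOPS"
```

The same holds for the MiB/s column, and min/max in the Total row are taken across both per-core rows.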
00:09:54.976 ========================================================
00:09:54.976                                                                           Latency(us)
00:09:54.976 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:54.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     186.54       0.09  907073.36     673.01 1012769.49
00:09:54.976 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     186.04       0.09  906728.66     682.17 1014109.47
00:09:54.976 ========================================================
00:09:54.976 Total                                                                    :     372.58       0.18  906901.24     673.01 1014109.47
00:09:54.976
00:09:54.976 [2024-11-05 12:24:24.091965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5b190 (9): Bad file descriptor
00:09:54.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:54.976 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:54.976 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:54.976 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 537010
00:09:54.976 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 537010
00:09:55.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (537010) - No such process
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 537010
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 537010
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 537010
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:55.542 [2024-11-05 12:24:24.612310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=537535
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535
00:09:55.542 12:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:09:55.542 [2024-11-05 12:24:24.677974] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:56.107 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.107 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535 00:09:56.107 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:56.673 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.673 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535 00:09:56.673 12:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:56.931 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.931 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535 00:09:56.931 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.496 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:57.496 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535 00:09:57.496 12:24:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:58.062 12:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:58.062 12:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535 00:09:58.062 12:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:58.628 12:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:58.628 12:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535 00:09:58.628 12:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:58.886 Initializing NVMe Controllers 00:09:58.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.886 Controller IO queue size 128, less than required. 00:09:58.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:58.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:58.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:58.886 Initialization complete. Launching workers. 00:09:58.886 ======================================================== 00:09:58.886 Latency(us) 00:09:58.886 Device Information : IOPS MiB/s Average min max 00:09:58.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004793.19 1000155.57 1043117.75 00:09:58.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004580.20 1000174.36 1041618.85 00:09:58.886 ======================================================== 00:09:58.886 Total : 256.00 0.12 1004686.70 1000155.57 1043117.75 00:09:58.886 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 537535 00:09:59.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (537535) - No such process 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 537535 00:09:59.144 12:24:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.144 rmmod nvme_tcp 00:09:59.144 rmmod nvme_fabrics 00:09:59.144 rmmod nvme_keyring 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 536982 ']' 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 536982 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 536982 ']' 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 536982 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 536982 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 536982' 00:09:59.144 killing process with pid 536982 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 536982 00:09:59.144 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 536982 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.403 12:24:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.403 12:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.314 00:10:01.314 real 0m12.457s 00:10:01.314 user 0m27.980s 00:10:01.314 sys 0m3.039s 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.314 ************************************ 00:10:01.314 END TEST nvmf_delete_subsystem 00:10:01.314 ************************************ 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.314 ************************************ 00:10:01.314 START TEST nvmf_host_management 00:10:01.314 ************************************ 00:10:01.314 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:01.574 * Looking for test storage... 
00:10:01.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:01.574 12:24:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.574 12:24:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.574 --rc genhtml_branch_coverage=1 00:10:01.574 --rc genhtml_function_coverage=1 00:10:01.574 --rc genhtml_legend=1 00:10:01.574 --rc geninfo_all_blocks=1 00:10:01.574 --rc geninfo_unexecuted_blocks=1 00:10:01.574 00:10:01.574 ' 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.574 --rc genhtml_branch_coverage=1 00:10:01.574 --rc genhtml_function_coverage=1 00:10:01.574 --rc genhtml_legend=1 00:10:01.574 --rc geninfo_all_blocks=1 00:10:01.574 --rc geninfo_unexecuted_blocks=1 00:10:01.574 00:10:01.574 ' 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.574 --rc genhtml_branch_coverage=1 00:10:01.574 --rc genhtml_function_coverage=1 00:10:01.574 --rc genhtml_legend=1 00:10:01.574 --rc geninfo_all_blocks=1 00:10:01.574 --rc geninfo_unexecuted_blocks=1 00:10:01.574 00:10:01.574 ' 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:01.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.574 --rc genhtml_branch_coverage=1 00:10:01.574 --rc genhtml_function_coverage=1 00:10:01.574 --rc genhtml_legend=1 00:10:01.574 --rc geninfo_all_blocks=1 00:10:01.574 --rc geninfo_unexecuted_blocks=1 00:10:01.574 00:10:01.574 ' 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.574 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.575 12:24:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.112 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.113 12:24:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.113 12:24:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:04.113 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:04.113 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:04.113 12:24:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:04.113 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:04.113 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:04.113 12:24:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:04.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:10:04.113 00:10:04.113 --- 10.0.0.2 ping statistics --- 00:10:04.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.113 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:10:04.113 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:10:04.113 00:10:04.113 --- 10.0.0.1 ping statistics --- 00:10:04.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.114 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=539888 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 539888 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 539888 ']' 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.114 12:24:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 [2024-11-05 12:24:32.973670] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:10:04.114 [2024-11-05 12:24:32.973767] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.114 [2024-11-05 12:24:33.043754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.114 [2024-11-05 12:24:33.091340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.114 [2024-11-05 12:24:33.091398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.114 [2024-11-05 12:24:33.091420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.114 [2024-11-05 12:24:33.091431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.114 [2024-11-05 12:24:33.091441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:04.114 [2024-11-05 12:24:33.093123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.114 [2024-11-05 12:24:33.093257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.114 [2024-11-05 12:24:33.093326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:04.114 [2024-11-05 12:24:33.093329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 [2024-11-05 12:24:33.234900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:04.114 12:24:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 Malloc0 00:10:04.114 [2024-11-05 12:24:33.303748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=539941 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 539941 /var/tmp/bdevperf.sock 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 539941 ']' 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:04.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.114 { 00:10:04.114 "params": { 00:10:04.114 "name": "Nvme$subsystem", 00:10:04.114 "trtype": "$TEST_TRANSPORT", 00:10:04.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.114 "adrfam": "ipv4", 00:10:04.114 "trsvcid": "$NVMF_PORT", 00:10:04.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.114 "hdgst": ${hdgst:-false}, 
00:10:04.114 "ddgst": ${ddgst:-false} 00:10:04.114 }, 00:10:04.114 "method": "bdev_nvme_attach_controller" 00:10:04.114 } 00:10:04.114 EOF 00:10:04.114 )") 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:04.114 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.114 "params": { 00:10:04.114 "name": "Nvme0", 00:10:04.114 "trtype": "tcp", 00:10:04.114 "traddr": "10.0.0.2", 00:10:04.114 "adrfam": "ipv4", 00:10:04.114 "trsvcid": "4420", 00:10:04.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:04.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:04.114 "hdgst": false, 00:10:04.114 "ddgst": false 00:10:04.114 }, 00:10:04.114 "method": "bdev_nvme_attach_controller" 00:10:04.114 }' 00:10:04.372 [2024-11-05 12:24:33.379483] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:10:04.372 [2024-11-05 12:24:33.379572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid539941 ] 00:10:04.372 [2024-11-05 12:24:33.449428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.372 [2024-11-05 12:24:33.496380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.630 Running I/O for 10 seconds... 
00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:04.630 12:24:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=542 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 542 -ge 100 ']' 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.887 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:04.887 [2024-11-05 12:24:34.120315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.887 [2024-11-05 12:24:34.120393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.887 [2024-11-05 12:24:34.120412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.887 [2024-11-05 12:24:34.120426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.887 [2024-11-05 12:24:34.120440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.887 [2024-11-05 12:24:34.120463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.887 [2024-11-05 12:24:34.120477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:04.887 [2024-11-05 12:24:34.120490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:04.888 [2024-11-05 12:24:34.120504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1970 is same with the state(6) to be set 00:10:04.888 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.888 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:04.888 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.888 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:05.146 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.146 12:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:05.147 [2024-11-05 12:24:34.131658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f1970 (9): Bad file descriptor 00:10:05.147 [2024-11-05 12:24:34.131764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.147 [2024-11-05 12:24:34.131789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.147 [2024-11-05 12:24:34.131815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.147 [2024-11-05 12:24:34.131830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:05.147 [2024-11-05 12:24:34.131854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:05.147 [2024-11-05 12:24:34.131889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated WRITE / ABORTED - SQ DELETION pairs elided: cid:3 through cid:62, lba advancing by 128 from 82304 to 89856 ...]
00:10:05.148 [2024-11-05 12:24:34.133740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:10:05.148 [2024-11-05 12:24:34.133754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:05.148 [2024-11-05 12:24:34.134980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:10:05.148 task offset: 81920 on job bdev=Nvme0n1 fails
00:10:05.148
00:10:05.148 Latency(us)
00:10:05.148 [2024-11-05T11:24:34.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:05.148 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:05.148 Job: Nvme0n1 ended in about 0.42 seconds with error
00:10:05.148 Verification LBA range: start 0x0 length 0x400
00:10:05.148 Nvme0n1 : 0.42 1541.59 96.35 154.16 0.00 36685.45 2524.35 34564.17
00:10:05.148 [2024-11-05T11:24:34.386Z] ===================================================================================================================
00:10:05.148 [2024-11-05T11:24:34.386Z] Total : 1541.59 96.35 154.16 0.00 36685.45 2524.35 34564.17
00:10:05.148 [2024-11-05 12:24:34.136869] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:05.148 [2024-11-05 12:24:34.239990] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 539941 00:10:06.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (539941) - No such process 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.082 { 00:10:06.082 "params": { 00:10:06.082 "name": "Nvme$subsystem", 00:10:06.082 "trtype": "$TEST_TRANSPORT", 00:10:06.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.082 "adrfam": "ipv4", 00:10:06.082 "trsvcid": "$NVMF_PORT", 00:10:06.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.082 "hdgst": ${hdgst:-false}, 00:10:06.082 "ddgst": ${ddgst:-false} 00:10:06.082 }, 00:10:06.082 "method": "bdev_nvme_attach_controller" 00:10:06.082 } 00:10:06.082 EOF 00:10:06.082 )") 00:10:06.082 12:24:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:06.082 12:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.082 "params": { 00:10:06.082 "name": "Nvme0", 00:10:06.082 "trtype": "tcp", 00:10:06.082 "traddr": "10.0.0.2", 00:10:06.082 "adrfam": "ipv4", 00:10:06.082 "trsvcid": "4420", 00:10:06.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:06.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:06.082 "hdgst": false, 00:10:06.082 "ddgst": false 00:10:06.082 }, 00:10:06.082 "method": "bdev_nvme_attach_controller" 00:10:06.082 }' 00:10:06.082 [2024-11-05 12:24:35.182947] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:10:06.082 [2024-11-05 12:24:35.183034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid540212 ] 00:10:06.082 [2024-11-05 12:24:35.252830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.083 [2024-11-05 12:24:35.300797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.344 Running I/O for 1 seconds... 
00:10:07.718 1630.00 IOPS, 101.88 MiB/s
00:10:07.718 Latency(us)
00:10:07.718 [2024-11-05T11:24:36.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:07.719 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:07.719 Verification LBA range: start 0x0 length 0x400
00:10:07.719 Nvme0n1 : 1.02 1668.24 104.26 0.00 0.00 37568.24 2524.35 33010.73
00:10:07.719 [2024-11-05T11:24:36.957Z] ===================================================================================================================
00:10:07.719 [2024-11-05T11:24:36.957Z] Total : 1668.24 104.26 0.00 0.00 37568.24 2524.35 33010.73
00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
12:24:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.719 rmmod nvme_tcp 00:10:07.719 rmmod nvme_fabrics 00:10:07.719 rmmod nvme_keyring 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 539888 ']' 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 539888 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 539888 ']' 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 539888 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 539888 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 539888' 00:10:07.719 killing process with pid 539888 00:10:07.719 12:24:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 539888 00:10:07.719 12:24:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 539888 00:10:07.979 [2024-11-05 12:24:37.032029] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.979 12:24:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.890 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.890 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:09.890 00:10:09.890 real 0m8.570s 00:10:09.890 user 0m18.742s 
00:10:09.890 sys 0m2.749s 00:10:09.890 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.890 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:09.890 ************************************ 00:10:09.890 END TEST nvmf_host_management 00:10:09.890 ************************************ 00:10:10.149 12:24:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:10.149 12:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:10.149 12:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.149 12:24:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.149 ************************************ 00:10:10.149 START TEST nvmf_lvol 00:10:10.150 ************************************ 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:10.150 * Looking for test storage... 
00:10:10.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.150 12:24:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.150 --rc genhtml_branch_coverage=1 00:10:10.150 --rc genhtml_function_coverage=1 00:10:10.150 --rc genhtml_legend=1 00:10:10.150 --rc geninfo_all_blocks=1 00:10:10.150 --rc geninfo_unexecuted_blocks=1 
00:10:10.150 00:10:10.150 ' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.150 --rc genhtml_branch_coverage=1 00:10:10.150 --rc genhtml_function_coverage=1 00:10:10.150 --rc genhtml_legend=1 00:10:10.150 --rc geninfo_all_blocks=1 00:10:10.150 --rc geninfo_unexecuted_blocks=1 00:10:10.150 00:10:10.150 ' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.150 --rc genhtml_branch_coverage=1 00:10:10.150 --rc genhtml_function_coverage=1 00:10:10.150 --rc genhtml_legend=1 00:10:10.150 --rc geninfo_all_blocks=1 00:10:10.150 --rc geninfo_unexecuted_blocks=1 00:10:10.150 00:10:10.150 ' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.150 --rc genhtml_branch_coverage=1 00:10:10.150 --rc genhtml_function_coverage=1 00:10:10.150 --rc genhtml_legend=1 00:10:10.150 --rc geninfo_all_blocks=1 00:10:10.150 --rc geninfo_unexecuted_blocks=1 00:10:10.150 00:10:10.150 ' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.150 12:24:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.150 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.151 12:24:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:12.683 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:12.683 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.683 
12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:12.683 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.683 12:24:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:12.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:12.683 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:10:12.684 00:10:12.684 --- 10.0.0.2 ping statistics --- 00:10:12.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.684 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:10:12.684 00:10:12.684 --- 10.0.0.1 ping statistics --- 00:10:12.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.684 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=542421 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 542421 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 542421 ']' 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:12.684 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:12.684 [2024-11-05 12:24:41.707694] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:10:12.684 [2024-11-05 12:24:41.707782] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.684 [2024-11-05 12:24:41.783829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.684 [2024-11-05 12:24:41.832813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.684 [2024-11-05 12:24:41.832891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.684 [2024-11-05 12:24:41.832917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.684 [2024-11-05 12:24:41.832928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.684 [2024-11-05 12:24:41.832938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:12.684 [2024-11-05 12:24:41.834312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.684 [2024-11-05 12:24:41.835882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.684 [2024-11-05 12:24:41.835918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.942 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:12.942 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:10:12.942 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.942 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.942 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:12.942 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.942 12:24:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.200 [2024-11-05 12:24:42.222099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.200 12:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.459 12:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:13.459 12:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:13.717 12:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:13.717 12:24:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:13.974 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:14.233 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=42235f5f-77cd-46d6-bb5a-1eba68251da9 00:10:14.233 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42235f5f-77cd-46d6-bb5a-1eba68251da9 lvol 20 00:10:14.490 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8130e51e-9d98-4db5-b254-a24bffbab4cb 00:10:14.490 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:14.748 12:24:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8130e51e-9d98-4db5-b254-a24bffbab4cb 00:10:15.005 12:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:15.263 [2024-11-05 12:24:44.450043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.263 12:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:15.521 12:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=542732 00:10:15.521 12:24:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:15.521 12:24:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:16.894 12:24:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8130e51e-9d98-4db5-b254-a24bffbab4cb MY_SNAPSHOT 00:10:16.894 12:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=060a9dea-edb2-48b7-a88d-4f87172cc9be 00:10:16.894 12:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8130e51e-9d98-4db5-b254-a24bffbab4cb 30 00:10:17.152 12:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 060a9dea-edb2-48b7-a88d-4f87172cc9be MY_CLONE 00:10:17.722 12:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4543af7d-a81b-4235-b95d-13568b8881e1 00:10:17.722 12:24:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4543af7d-a81b-4235-b95d-13568b8881e1 00:10:18.287 12:24:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 542732 00:10:26.394 Initializing NVMe Controllers 00:10:26.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:26.394 Controller IO queue size 128, less than required. 00:10:26.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:26.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:26.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:26.394 Initialization complete. Launching workers. 00:10:26.394 ======================================================== 00:10:26.394 Latency(us) 00:10:26.394 Device Information : IOPS MiB/s Average min max 00:10:26.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10322.30 40.32 12401.69 2175.29 120267.72 00:10:26.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10454.90 40.84 12247.87 2239.17 49589.99 00:10:26.394 ======================================================== 00:10:26.394 Total : 20777.20 81.16 12324.29 2175.29 120267.72 00:10:26.394 00:10:26.394 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:26.394 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8130e51e-9d98-4db5-b254-a24bffbab4cb 00:10:26.651 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42235f5f-77cd-46d6-bb5a-1eba68251da9 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:26.909 12:24:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:26.909 12:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:26.909 rmmod nvme_tcp 00:10:26.909 rmmod nvme_fabrics 00:10:26.909 rmmod nvme_keyring 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 542421 ']' 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 542421 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 542421 ']' 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 542421 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 542421 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 542421' 00:10:26.909 killing process with pid 542421 00:10:26.909 12:24:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 542421 00:10:26.909 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 542421 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.167 12:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:29.703 00:10:29.703 real 0m19.217s 00:10:29.703 user 1m5.221s 00:10:29.703 sys 0m5.648s 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:29.703 ************************************ 00:10:29.703 END TEST 
nvmf_lvol 00:10:29.703 ************************************ 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.703 ************************************ 00:10:29.703 START TEST nvmf_lvs_grow 00:10:29.703 ************************************ 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:29.703 * Looking for test storage... 00:10:29.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.703 12:24:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:29.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.703 --rc genhtml_branch_coverage=1 00:10:29.703 --rc genhtml_function_coverage=1 00:10:29.703 --rc genhtml_legend=1 00:10:29.703 --rc geninfo_all_blocks=1 00:10:29.703 --rc geninfo_unexecuted_blocks=1 00:10:29.703 00:10:29.703 ' 
00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:29.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.703 --rc genhtml_branch_coverage=1 00:10:29.703 --rc genhtml_function_coverage=1 00:10:29.703 --rc genhtml_legend=1 00:10:29.703 --rc geninfo_all_blocks=1 00:10:29.703 --rc geninfo_unexecuted_blocks=1 00:10:29.703 00:10:29.703 ' 00:10:29.703 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:29.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.703 --rc genhtml_branch_coverage=1 00:10:29.703 --rc genhtml_function_coverage=1 00:10:29.703 --rc genhtml_legend=1 00:10:29.703 --rc geninfo_all_blocks=1 00:10:29.704 --rc geninfo_unexecuted_blocks=1 00:10:29.704 00:10:29.704 ' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:29.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.704 --rc genhtml_branch_coverage=1 00:10:29.704 --rc genhtml_function_coverage=1 00:10:29.704 --rc genhtml_legend=1 00:10:29.704 --rc geninfo_all_blocks=1 00:10:29.704 --rc geninfo_unexecuted_blocks=1 00:10:29.704 00:10:29.704 ' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.704 12:24:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.704 
12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.704 12:24:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.704 
12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:29.704 12:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:31.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:31.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.613 
12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:31.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:31.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.613 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.873 12:25:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:10:31.873 00:10:31.873 --- 10.0.0.2 ping statistics --- 00:10:31.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.873 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:31.873 00:10:31.873 --- 10.0.0.1 ping statistics --- 00:10:31.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.873 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:31.873 12:25:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=546148 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 546148 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 546148 ']' 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:31.873 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.873 [2024-11-05 12:25:01.057439] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:10:31.873 [2024-11-05 12:25:01.057534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.131 [2024-11-05 12:25:01.132886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.131 [2024-11-05 12:25:01.178660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.131 [2024-11-05 12:25:01.178714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.131 [2024-11-05 12:25:01.178737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.131 [2024-11-05 12:25:01.178748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.131 [2024-11-05 12:25:01.178758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
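The namespace plumbing driven by nvmf/common.sh above can be summarized as a short script. This is a sketch assembled from the commands visible in this log (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run); the commands are printed rather than executed here, since the real ones need root and the test NICs.

```shell
# Split-stack setup used above: one NIC peer (cvl_0_0) is moved into a
# private namespace for the nvmf target, the other (cvl_0_1) stays in the
# root namespace for the initiator side. Names/addresses are from this run.
# Printed, not executed -- the real commands require root and the hardware.
NS=cvl_0_0_ns_spdk
setup="ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1"
printf '%s\n' "$setup"
```

The two final pings are the same cross-namespace reachability check the log performs before `nvmf_tgt` is started under `ip netns exec`.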
00:10:32.131 [2024-11-05 12:25:01.179333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.131 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:32.131 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:10:32.131 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.131 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.131 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.131 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.131 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:32.415 [2024-11-05 12:25:01.558401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:32.415 ************************************ 00:10:32.415 START TEST lvs_grow_clean 00:10:32.415 ************************************ 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:32.415 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:32.726 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:32.726 12:25:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:33.033 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:33.034 12:25:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:33.034 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:33.317 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:33.318 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:33.318 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f lvol 150 00:10:33.576 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d323f6cf-745f-418f-8b63-7c1f0e2fdfad 00:10:33.576 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:33.576 12:25:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:33.834 [2024-11-05 12:25:02.992292] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:33.834 [2024-11-05 12:25:02.992369] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:33.834 true 00:10:33.834 12:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:33.834 12:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:34.091 12:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:34.091 12:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:34.349 12:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d323f6cf-745f-418f-8b63-7c1f0e2fdfad 00:10:34.607 12:25:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:34.865 [2024-11-05 12:25:04.067601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.865 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=546602 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:35.123 12:25:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 546602 /var/tmp/bdevperf.sock 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 546602 ']' 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:35.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:35.123 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:35.381 [2024-11-05 12:25:04.398600] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
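The RPC traffic this test drives to export the logical volume over NVMe/TCP, plus the bdevperf-side attach, can be sketched as the sequence below. Paths, the lvol UUID and addresses are the ones from this run (the `scripts/rpc.py` path is assumed relative to an SPDK checkout); printed rather than executed, since the calls need the running `nvmf_tgt` and `bdevperf` processes.

```shell
# Export-and-attach RPC sequence as seen in this log. Printed, not
# executed: each call targets a live SPDK process over its RPC socket.
RPC=scripts/rpc.py                         # assumed path in an SPDK tree
NQN=nqn.2016-06.io.spdk:cnode0
LVOL=d323f6cf-745f-418f-8b63-7c1f0e2fdfad  # lvol UUID from this run
seq="$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem $NQN -a -s SPDK0
$RPC nvmf_subsystem_add_ns $NQN $LVOL
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN"
printf '%s\n' "$seq"
```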
00:10:35.381 [2024-11-05 12:25:04.398674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546602 ] 00:10:35.381 [2024-11-05 12:25:04.464491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.381 [2024-11-05 12:25:04.509115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.644 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:35.644 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:10:35.644 12:25:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:35.901 Nvme0n1 00:10:35.901 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:36.159 [ 00:10:36.159 { 00:10:36.159 "name": "Nvme0n1", 00:10:36.159 "aliases": [ 00:10:36.159 "d323f6cf-745f-418f-8b63-7c1f0e2fdfad" 00:10:36.159 ], 00:10:36.159 "product_name": "NVMe disk", 00:10:36.159 "block_size": 4096, 00:10:36.159 "num_blocks": 38912, 00:10:36.159 "uuid": "d323f6cf-745f-418f-8b63-7c1f0e2fdfad", 00:10:36.159 "numa_id": 0, 00:10:36.159 "assigned_rate_limits": { 00:10:36.159 "rw_ios_per_sec": 0, 00:10:36.159 "rw_mbytes_per_sec": 0, 00:10:36.159 "r_mbytes_per_sec": 0, 00:10:36.159 "w_mbytes_per_sec": 0 00:10:36.159 }, 00:10:36.159 "claimed": false, 00:10:36.159 "zoned": false, 00:10:36.159 "supported_io_types": { 00:10:36.159 "read": true, 
00:10:36.159 "write": true, 00:10:36.159 "unmap": true, 00:10:36.159 "flush": true, 00:10:36.159 "reset": true, 00:10:36.159 "nvme_admin": true, 00:10:36.159 "nvme_io": true, 00:10:36.159 "nvme_io_md": false, 00:10:36.159 "write_zeroes": true, 00:10:36.159 "zcopy": false, 00:10:36.159 "get_zone_info": false, 00:10:36.159 "zone_management": false, 00:10:36.159 "zone_append": false, 00:10:36.159 "compare": true, 00:10:36.159 "compare_and_write": true, 00:10:36.159 "abort": true, 00:10:36.159 "seek_hole": false, 00:10:36.159 "seek_data": false, 00:10:36.159 "copy": true, 00:10:36.159 "nvme_iov_md": false 00:10:36.159 }, 00:10:36.159 "memory_domains": [ 00:10:36.159 { 00:10:36.159 "dma_device_id": "system", 00:10:36.159 "dma_device_type": 1 00:10:36.159 } 00:10:36.159 ], 00:10:36.159 "driver_specific": { 00:10:36.159 "nvme": [ 00:10:36.159 { 00:10:36.159 "trid": { 00:10:36.159 "trtype": "TCP", 00:10:36.159 "adrfam": "IPv4", 00:10:36.159 "traddr": "10.0.0.2", 00:10:36.159 "trsvcid": "4420", 00:10:36.159 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:36.159 }, 00:10:36.159 "ctrlr_data": { 00:10:36.159 "cntlid": 1, 00:10:36.159 "vendor_id": "0x8086", 00:10:36.159 "model_number": "SPDK bdev Controller", 00:10:36.159 "serial_number": "SPDK0", 00:10:36.159 "firmware_revision": "25.01", 00:10:36.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:36.159 "oacs": { 00:10:36.159 "security": 0, 00:10:36.159 "format": 0, 00:10:36.159 "firmware": 0, 00:10:36.159 "ns_manage": 0 00:10:36.159 }, 00:10:36.159 "multi_ctrlr": true, 00:10:36.159 "ana_reporting": false 00:10:36.159 }, 00:10:36.159 "vs": { 00:10:36.159 "nvme_version": "1.3" 00:10:36.159 }, 00:10:36.159 "ns_data": { 00:10:36.159 "id": 1, 00:10:36.159 "can_share": true 00:10:36.159 } 00:10:36.159 } 00:10:36.159 ], 00:10:36.159 "mp_policy": "active_passive" 00:10:36.159 } 00:10:36.159 } 00:10:36.159 ] 00:10:36.417 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=546738 
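The `"num_blocks": 38912` reported for Nvme0n1 above follows from the sizes used earlier in the test: the 150 MiB lvol is rounded up to whole 4 MiB clusters (the `--cluster-sz 4194304` passed to `bdev_lvol_create_lvstore`), and the result is expressed in 4096-byte blocks. A quick check with the values from this run:

```shell
# 150 MiB lvol on a 4 MiB-cluster lvstore, 4096-byte blocks (values from
# this log). 150/4 = 37.5 rounds up to 38 clusters; 38 * 4 MiB / 4096 B
# gives the namespace's block count.
lvol_mb=150; cluster_mb=4; block_size=4096
clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))
num_blocks=$(( clusters * cluster_mb * 1024 * 1024 / block_size ))
echo "$clusters clusters, $num_blocks blocks"   # 38 clusters, 38912 blocks
```

The 38 clusters reappear later in the log as `"num_allocated_clusters": 38` on the lvol bdev itself.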
00:10:36.417 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:36.417 12:25:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:36.417 Running I/O for 10 seconds... 00:10:37.351 Latency(us) 00:10:37.351 [2024-11-05T11:25:06.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.352 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:10:37.352 [2024-11-05T11:25:06.590Z] =================================================================================================================== 00:10:37.352 [2024-11-05T11:25:06.590Z] Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:10:37.352 00:10:38.285 12:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:38.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.285 Nvme0n1 : 2.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:10:38.285 [2024-11-05T11:25:07.523Z] =================================================================================================================== 00:10:38.285 [2024-11-05T11:25:07.523Z] Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:10:38.285 00:10:38.543 true 00:10:38.543 12:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:38.543 12:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:10:38.801 12:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:38.801 12:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:38.801 12:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 546738 00:10:39.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.367 Nvme0n1 : 3.00 15389.67 60.12 0.00 0.00 0.00 0.00 0.00 00:10:39.367 [2024-11-05T11:25:08.605Z] =================================================================================================================== 00:10:39.367 [2024-11-05T11:25:08.605Z] Total : 15389.67 60.12 0.00 0.00 0.00 0.00 0.00 00:10:39.367 00:10:40.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.301 Nvme0n1 : 4.00 15495.00 60.53 0.00 0.00 0.00 0.00 0.00 00:10:40.301 [2024-11-05T11:25:09.539Z] =================================================================================================================== 00:10:40.301 [2024-11-05T11:25:09.539Z] Total : 15495.00 60.53 0.00 0.00 0.00 0.00 0.00 00:10:40.301 00:10:41.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.674 Nvme0n1 : 5.00 15545.60 60.73 0.00 0.00 0.00 0.00 0.00 00:10:41.674 [2024-11-05T11:25:10.912Z] =================================================================================================================== 00:10:41.674 [2024-11-05T11:25:10.912Z] Total : 15545.60 60.73 0.00 0.00 0.00 0.00 0.00 00:10:41.674 00:10:42.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.606 Nvme0n1 : 6.00 15600.50 60.94 0.00 0.00 0.00 0.00 0.00 00:10:42.606 [2024-11-05T11:25:11.844Z] =================================================================================================================== 00:10:42.606 
[2024-11-05T11:25:11.844Z] Total : 15600.50 60.94 0.00 0.00 0.00 0.00 0.00 00:10:42.606 00:10:43.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.540 Nvme0n1 : 7.00 15639.71 61.09 0.00 0.00 0.00 0.00 0.00 00:10:43.540 [2024-11-05T11:25:12.778Z] =================================================================================================================== 00:10:43.540 [2024-11-05T11:25:12.778Z] Total : 15639.71 61.09 0.00 0.00 0.00 0.00 0.00 00:10:43.540 00:10:44.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.473 Nvme0n1 : 8.00 15685.00 61.27 0.00 0.00 0.00 0.00 0.00 00:10:44.473 [2024-11-05T11:25:13.711Z] =================================================================================================================== 00:10:44.473 [2024-11-05T11:25:13.711Z] Total : 15685.00 61.27 0.00 0.00 0.00 0.00 0.00 00:10:44.473 00:10:45.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.406 Nvme0n1 : 9.00 15706.11 61.35 0.00 0.00 0.00 0.00 0.00 00:10:45.406 [2024-11-05T11:25:14.644Z] =================================================================================================================== 00:10:45.406 [2024-11-05T11:25:14.644Z] Total : 15706.11 61.35 0.00 0.00 0.00 0.00 0.00 00:10:45.406 00:10:46.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.339 Nvme0n1 : 10.00 15735.70 61.47 0.00 0.00 0.00 0.00 0.00 00:10:46.339 [2024-11-05T11:25:15.577Z] =================================================================================================================== 00:10:46.339 [2024-11-05T11:25:15.577Z] Total : 15735.70 61.47 0.00 0.00 0.00 0.00 0.00 00:10:46.339 00:10:46.339 00:10:46.339 Latency(us) 00:10:46.339 [2024-11-05T11:25:15.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:46.339 Nvme0n1 : 10.00 15740.89 61.49 0.00 0.00 8127.31 2160.26 15437.37 00:10:46.339 [2024-11-05T11:25:15.577Z] =================================================================================================================== 00:10:46.339 [2024-11-05T11:25:15.577Z] Total : 15740.89 61.49 0.00 0.00 8127.31 2160.26 15437.37 00:10:46.339 { 00:10:46.339 "results": [ 00:10:46.339 { 00:10:46.339 "job": "Nvme0n1", 00:10:46.339 "core_mask": "0x2", 00:10:46.339 "workload": "randwrite", 00:10:46.339 "status": "finished", 00:10:46.339 "queue_depth": 128, 00:10:46.339 "io_size": 4096, 00:10:46.339 "runtime": 10.004834, 00:10:46.339 "iops": 15740.890853361485, 00:10:46.339 "mibps": 61.4878548959433, 00:10:46.339 "io_failed": 0, 00:10:46.339 "io_timeout": 0, 00:10:46.339 "avg_latency_us": 8127.313159277957, 00:10:46.339 "min_latency_us": 2160.260740740741, 00:10:46.339 "max_latency_us": 15437.368888888888 00:10:46.339 } 00:10:46.339 ], 00:10:46.339 "core_count": 1 00:10:46.339 } 00:10:46.339 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 546602 00:10:46.339 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 546602 ']' 00:10:46.339 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 546602 00:10:46.339 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:10:46.339 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:46.339 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 546602 00:10:46.596 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:46.596 12:25:15 
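The MiB/s column in the bdevperf summary above is just IOPS times the 4096-byte I/O size, converted to MiB; with 4 KiB I/Os that reduces to IOPS/256. Re-deriving the final figure from the `"iops"` value in the JSON:

```shell
# bdevperf MiB/s sanity check: iops * io_size / 1 MiB, with the iops
# value taken from the JSON results above (io_size 4096 per the run).
iops=15740.890853361485
mibps=$(awk -v iops="$iops" 'BEGIN { printf "%.2f", iops * 4096 / (1024*1024) }')
echo "$mibps MiB/s"   # 61.49 MiB/s, matching "mibps": 61.4878...
```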
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:46.596 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 546602' 00:10:46.596 killing process with pid 546602 00:10:46.596 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 546602 00:10:46.596 Received shutdown signal, test time was about 10.000000 seconds 00:10:46.596 00:10:46.596 Latency(us) 00:10:46.596 [2024-11-05T11:25:15.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.596 [2024-11-05T11:25:15.834Z] =================================================================================================================== 00:10:46.596 [2024-11-05T11:25:15.834Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:46.596 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 546602 00:10:46.596 12:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:46.854 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:47.112 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:47.112 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:47.370 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:10:47.370 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:47.370 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:47.629 [2024-11-05 12:25:16.849733] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.886 12:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:47.886 12:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:48.144 request: 00:10:48.144 { 00:10:48.144 "uuid": "e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f", 00:10:48.144 "method": "bdev_lvol_get_lvstores", 00:10:48.144 "req_id": 1 00:10:48.144 } 00:10:48.144 Got JSON-RPC error response 00:10:48.144 response: 00:10:48.144 { 00:10:48.144 "code": -19, 00:10:48.144 "message": "No such device" 00:10:48.144 } 00:10:48.144 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:48.144 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:48.144 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:48.144 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:48.144 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.402 aio_bdev 00:10:48.402 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d323f6cf-745f-418f-8b63-7c1f0e2fdfad 00:10:48.402 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=d323f6cf-745f-418f-8b63-7c1f0e2fdfad 00:10:48.402 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:48.402 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:10:48.402 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:48.402 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:48.402 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:48.660 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d323f6cf-745f-418f-8b63-7c1f0e2fdfad -t 2000 00:10:48.917 [ 00:10:48.917 { 00:10:48.917 "name": "d323f6cf-745f-418f-8b63-7c1f0e2fdfad", 00:10:48.917 "aliases": [ 00:10:48.917 "lvs/lvol" 00:10:48.917 ], 00:10:48.917 "product_name": "Logical Volume", 00:10:48.917 "block_size": 4096, 00:10:48.917 "num_blocks": 38912, 00:10:48.917 "uuid": "d323f6cf-745f-418f-8b63-7c1f0e2fdfad", 00:10:48.917 "assigned_rate_limits": { 00:10:48.917 "rw_ios_per_sec": 0, 00:10:48.917 "rw_mbytes_per_sec": 0, 00:10:48.918 "r_mbytes_per_sec": 0, 00:10:48.918 "w_mbytes_per_sec": 0 00:10:48.918 }, 00:10:48.918 "claimed": false, 00:10:48.918 "zoned": false, 00:10:48.918 "supported_io_types": { 00:10:48.918 "read": true, 00:10:48.918 "write": true, 00:10:48.918 "unmap": true, 00:10:48.918 "flush": false, 00:10:48.918 "reset": true, 00:10:48.918 
"nvme_admin": false, 00:10:48.918 "nvme_io": false, 00:10:48.918 "nvme_io_md": false, 00:10:48.918 "write_zeroes": true, 00:10:48.918 "zcopy": false, 00:10:48.918 "get_zone_info": false, 00:10:48.918 "zone_management": false, 00:10:48.918 "zone_append": false, 00:10:48.918 "compare": false, 00:10:48.918 "compare_and_write": false, 00:10:48.918 "abort": false, 00:10:48.918 "seek_hole": true, 00:10:48.918 "seek_data": true, 00:10:48.918 "copy": false, 00:10:48.918 "nvme_iov_md": false 00:10:48.918 }, 00:10:48.918 "driver_specific": { 00:10:48.918 "lvol": { 00:10:48.918 "lvol_store_uuid": "e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f", 00:10:48.918 "base_bdev": "aio_bdev", 00:10:48.918 "thin_provision": false, 00:10:48.918 "num_allocated_clusters": 38, 00:10:48.918 "snapshot": false, 00:10:48.918 "clone": false, 00:10:48.918 "esnap_clone": false 00:10:48.918 } 00:10:48.918 } 00:10:48.918 } 00:10:48.918 ] 00:10:48.918 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:10:48.918 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:48.918 12:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:49.176 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:49.176 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:49.176 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:49.433 12:25:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:49.433 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d323f6cf-745f-418f-8b63-7c1f0e2fdfad 00:10:49.692 12:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e1f12a98-9ed9-411f-8dd4-2fb3f59ec83f 00:10:49.950 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:50.208 00:10:50.208 real 0m17.733s 00:10:50.208 user 0m17.183s 00:10:50.208 sys 0m1.943s 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:50.208 ************************************ 00:10:50.208 END TEST lvs_grow_clean 00:10:50.208 ************************************ 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:50.208 ************************************ 
00:10:50.208 START TEST lvs_grow_dirty 00:10:50.208 ************************************ 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:50.208 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:50.466 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:50.466 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:50.724 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=282f4a7c-7d16-4874-b53a-d816c58b56e7 00:10:50.724 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:10:50.724 12:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:51.290 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:51.290 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:51.290 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 lvol 150 00:10:51.290 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=93fee5c8-ec5e-4a87-b7b8-80b6542f874b 00:10:51.290 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:51.290 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:51.547 [2024-11-05 12:25:20.784431] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:51.547 [2024-11-05 12:25:20.784511] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:51.804 true 00:10:51.804 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:10:51.804 12:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:52.062 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:52.062 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:52.319 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 93fee5c8-ec5e-4a87-b7b8-80b6542f874b 00:10:52.577 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:52.835 [2024-11-05 12:25:21.867783] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.835 12:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=548680 00:10:53.093 12:25:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 548680 /var/tmp/bdevperf.sock 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 548680 ']' 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:53.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:53.093 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:53.093 [2024-11-05 12:25:22.195414] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:10:53.093 [2024-11-05 12:25:22.195490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548680 ] 00:10:53.093 [2024-11-05 12:25:22.262921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.093 [2024-11-05 12:25:22.308244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.352 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:53.352 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:10:53.352 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:53.612 Nvme0n1 00:10:53.612 12:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:53.869 [ 00:10:53.869 { 00:10:53.869 "name": "Nvme0n1", 00:10:53.869 "aliases": [ 00:10:53.869 "93fee5c8-ec5e-4a87-b7b8-80b6542f874b" 00:10:53.869 ], 00:10:53.869 "product_name": "NVMe disk", 00:10:53.870 "block_size": 4096, 00:10:53.870 "num_blocks": 38912, 00:10:53.870 "uuid": "93fee5c8-ec5e-4a87-b7b8-80b6542f874b", 00:10:53.870 "numa_id": 0, 00:10:53.870 "assigned_rate_limits": { 00:10:53.870 "rw_ios_per_sec": 0, 00:10:53.870 "rw_mbytes_per_sec": 0, 00:10:53.870 "r_mbytes_per_sec": 0, 00:10:53.870 "w_mbytes_per_sec": 0 00:10:53.870 }, 00:10:53.870 "claimed": false, 00:10:53.870 "zoned": false, 00:10:53.870 "supported_io_types": { 00:10:53.870 "read": true, 
00:10:53.870 "write": true, 00:10:53.870 "unmap": true, 00:10:53.870 "flush": true, 00:10:53.870 "reset": true, 00:10:53.870 "nvme_admin": true, 00:10:53.870 "nvme_io": true, 00:10:53.870 "nvme_io_md": false, 00:10:53.870 "write_zeroes": true, 00:10:53.870 "zcopy": false, 00:10:53.870 "get_zone_info": false, 00:10:53.870 "zone_management": false, 00:10:53.870 "zone_append": false, 00:10:53.870 "compare": true, 00:10:53.870 "compare_and_write": true, 00:10:53.870 "abort": true, 00:10:53.870 "seek_hole": false, 00:10:53.870 "seek_data": false, 00:10:53.870 "copy": true, 00:10:53.870 "nvme_iov_md": false 00:10:53.870 }, 00:10:53.870 "memory_domains": [ 00:10:53.870 { 00:10:53.870 "dma_device_id": "system", 00:10:53.870 "dma_device_type": 1 00:10:53.870 } 00:10:53.870 ], 00:10:53.870 "driver_specific": { 00:10:53.870 "nvme": [ 00:10:53.870 { 00:10:53.870 "trid": { 00:10:53.870 "trtype": "TCP", 00:10:53.870 "adrfam": "IPv4", 00:10:53.870 "traddr": "10.0.0.2", 00:10:53.870 "trsvcid": "4420", 00:10:53.870 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:53.870 }, 00:10:53.870 "ctrlr_data": { 00:10:53.870 "cntlid": 1, 00:10:53.870 "vendor_id": "0x8086", 00:10:53.870 "model_number": "SPDK bdev Controller", 00:10:53.870 "serial_number": "SPDK0", 00:10:53.870 "firmware_revision": "25.01", 00:10:53.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:53.870 "oacs": { 00:10:53.870 "security": 0, 00:10:53.870 "format": 0, 00:10:53.870 "firmware": 0, 00:10:53.870 "ns_manage": 0 00:10:53.870 }, 00:10:53.870 "multi_ctrlr": true, 00:10:53.870 "ana_reporting": false 00:10:53.870 }, 00:10:53.870 "vs": { 00:10:53.870 "nvme_version": "1.3" 00:10:53.870 }, 00:10:53.870 "ns_data": { 00:10:53.870 "id": 1, 00:10:53.870 "can_share": true 00:10:53.870 } 00:10:53.870 } 00:10:53.870 ], 00:10:53.870 "mp_policy": "active_passive" 00:10:53.870 } 00:10:53.870 } 00:10:53.870 ] 00:10:53.870 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=548804 
00:10:53.870 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:53.870 12:25:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:54.128 Running I/O for 10 seconds... 00:10:55.061 Latency(us) 00:10:55.061 [2024-11-05T11:25:24.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.061 Nvme0n1 : 1.00 15021.00 58.68 0.00 0.00 0.00 0.00 0.00 00:10:55.061 [2024-11-05T11:25:24.299Z] =================================================================================================================== 00:10:55.061 [2024-11-05T11:25:24.299Z] Total : 15021.00 58.68 0.00 0.00 0.00 0.00 0.00 00:10:55.061 00:10:55.996 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:10:55.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.996 Nvme0n1 : 2.00 15226.50 59.48 0.00 0.00 0.00 0.00 0.00 00:10:55.996 [2024-11-05T11:25:25.234Z] =================================================================================================================== 00:10:55.996 [2024-11-05T11:25:25.234Z] Total : 15226.50 59.48 0.00 0.00 0.00 0.00 0.00 00:10:55.996 00:10:56.254 true 00:10:56.254 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:10:56.254 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:10:56.512 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:56.512 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:56.512 12:25:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 548804 00:10:57.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.078 Nvme0n1 : 3.00 15294.67 59.74 0.00 0.00 0.00 0.00 0.00 00:10:57.078 [2024-11-05T11:25:26.316Z] =================================================================================================================== 00:10:57.078 [2024-11-05T11:25:26.316Z] Total : 15294.67 59.74 0.00 0.00 0.00 0.00 0.00 00:10:57.078 00:10:58.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.012 Nvme0n1 : 4.00 15392.25 60.13 0.00 0.00 0.00 0.00 0.00 00:10:58.012 [2024-11-05T11:25:27.250Z] =================================================================================================================== 00:10:58.012 [2024-11-05T11:25:27.250Z] Total : 15392.25 60.13 0.00 0.00 0.00 0.00 0.00 00:10:58.012 00:10:58.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.946 Nvme0n1 : 5.00 15463.40 60.40 0.00 0.00 0.00 0.00 0.00 00:10:58.946 [2024-11-05T11:25:28.184Z] =================================================================================================================== 00:10:58.946 [2024-11-05T11:25:28.184Z] Total : 15463.40 60.40 0.00 0.00 0.00 0.00 0.00 00:10:58.946 00:11:00.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.319 Nvme0n1 : 6.00 15511.33 60.59 0.00 0.00 0.00 0.00 0.00 00:11:00.319 [2024-11-05T11:25:29.557Z] =================================================================================================================== 00:11:00.319 
[2024-11-05T11:25:29.557Z] Total : 15511.33 60.59 0.00 0.00 0.00 0.00 0.00 00:11:00.319 00:11:01.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.251 Nvme0n1 : 7.00 15536.14 60.69 0.00 0.00 0.00 0.00 0.00 00:11:01.251 [2024-11-05T11:25:30.489Z] =================================================================================================================== 00:11:01.251 [2024-11-05T11:25:30.489Z] Total : 15536.14 60.69 0.00 0.00 0.00 0.00 0.00 00:11:01.251 00:11:02.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.184 Nvme0n1 : 8.00 15570.62 60.82 0.00 0.00 0.00 0.00 0.00 00:11:02.184 [2024-11-05T11:25:31.422Z] =================================================================================================================== 00:11:02.184 [2024-11-05T11:25:31.422Z] Total : 15570.62 60.82 0.00 0.00 0.00 0.00 0.00 00:11:02.184 00:11:03.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.116 Nvme0n1 : 9.00 15604.44 60.95 0.00 0.00 0.00 0.00 0.00 00:11:03.116 [2024-11-05T11:25:32.354Z] =================================================================================================================== 00:11:03.116 [2024-11-05T11:25:32.354Z] Total : 15604.44 60.95 0.00 0.00 0.00 0.00 0.00 00:11:03.116 00:11:04.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.050 Nvme0n1 : 10.00 15631.50 61.06 0.00 0.00 0.00 0.00 0.00 00:11:04.050 [2024-11-05T11:25:33.288Z] =================================================================================================================== 00:11:04.050 [2024-11-05T11:25:33.288Z] Total : 15631.50 61.06 0.00 0.00 0.00 0.00 0.00 00:11:04.050 00:11:04.050 00:11:04.050 Latency(us) 00:11:04.050 [2024-11-05T11:25:33.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:04.050 Nvme0n1 : 10.00 15638.96 61.09 0.00 0.00 8180.42 3155.44 15631.55 00:11:04.050 [2024-11-05T11:25:33.288Z] =================================================================================================================== 00:11:04.050 [2024-11-05T11:25:33.288Z] Total : 15638.96 61.09 0.00 0.00 8180.42 3155.44 15631.55 00:11:04.050 { 00:11:04.050 "results": [ 00:11:04.050 { 00:11:04.050 "job": "Nvme0n1", 00:11:04.050 "core_mask": "0x2", 00:11:04.050 "workload": "randwrite", 00:11:04.050 "status": "finished", 00:11:04.050 "queue_depth": 128, 00:11:04.050 "io_size": 4096, 00:11:04.050 "runtime": 10.003412, 00:11:04.050 "iops": 15638.963985488152, 00:11:04.050 "mibps": 61.08970306831309, 00:11:04.050 "io_failed": 0, 00:11:04.050 "io_timeout": 0, 00:11:04.050 "avg_latency_us": 8180.41798760926, 00:11:04.050 "min_latency_us": 3155.437037037037, 00:11:04.050 "max_latency_us": 15631.54962962963 00:11:04.050 } 00:11:04.050 ], 00:11:04.050 "core_count": 1 00:11:04.050 } 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 548680 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 548680 ']' 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 548680 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 548680 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:04.050 12:25:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 548680' 00:11:04.050 killing process with pid 548680 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 548680 00:11:04.050 Received shutdown signal, test time was about 10.000000 seconds 00:11:04.050 00:11:04.050 Latency(us) 00:11:04.050 [2024-11-05T11:25:33.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.050 [2024-11-05T11:25:33.288Z] =================================================================================================================== 00:11:04.050 [2024-11-05T11:25:33.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:04.050 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 548680 00:11:04.308 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:04.566 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:04.823 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:04.823 12:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 546148 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 546148 00:11:05.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 546148 Killed "${NVMF_APP[@]}" "$@" 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=550140 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 550140 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 550140 ']' 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.082 12:25:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:05.082 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:05.082 [2024-11-05 12:25:34.312484] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:11:05.082 [2024-11-05 12:25:34.312580] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.341 [2024-11-05 12:25:34.387288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.341 [2024-11-05 12:25:34.435686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.341 [2024-11-05 12:25:34.435761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.341 [2024-11-05 12:25:34.435774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.341 [2024-11-05 12:25:34.435785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.341 [2024-11-05 12:25:34.435799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:05.341 [2024-11-05 12:25:34.436419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.341 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.341 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:05.341 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.341 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.341 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:05.341 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.341 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:05.600 [2024-11-05 12:25:34.839675] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:05.600 [2024-11-05 12:25:34.839796] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:05.600 [2024-11-05 12:25:34.839857] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 93fee5c8-ec5e-4a87-b7b8-80b6542f874b 00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=93fee5c8-ec5e-4a87-b7b8-80b6542f874b 
00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.857 12:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:06.114 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 93fee5c8-ec5e-4a87-b7b8-80b6542f874b -t 2000 00:11:06.371 [ 00:11:06.371 { 00:11:06.371 "name": "93fee5c8-ec5e-4a87-b7b8-80b6542f874b", 00:11:06.371 "aliases": [ 00:11:06.371 "lvs/lvol" 00:11:06.371 ], 00:11:06.371 "product_name": "Logical Volume", 00:11:06.371 "block_size": 4096, 00:11:06.371 "num_blocks": 38912, 00:11:06.371 "uuid": "93fee5c8-ec5e-4a87-b7b8-80b6542f874b", 00:11:06.371 "assigned_rate_limits": { 00:11:06.371 "rw_ios_per_sec": 0, 00:11:06.371 "rw_mbytes_per_sec": 0, 00:11:06.371 "r_mbytes_per_sec": 0, 00:11:06.371 "w_mbytes_per_sec": 0 00:11:06.371 }, 00:11:06.371 "claimed": false, 00:11:06.371 "zoned": false, 00:11:06.371 "supported_io_types": { 00:11:06.371 "read": true, 00:11:06.371 "write": true, 00:11:06.371 "unmap": true, 00:11:06.371 "flush": false, 00:11:06.371 "reset": true, 00:11:06.371 "nvme_admin": false, 00:11:06.371 "nvme_io": false, 00:11:06.371 "nvme_io_md": false, 00:11:06.371 "write_zeroes": true, 00:11:06.371 "zcopy": false, 00:11:06.371 "get_zone_info": false, 00:11:06.371 "zone_management": false, 00:11:06.371 "zone_append": 
false, 00:11:06.371 "compare": false, 00:11:06.371 "compare_and_write": false, 00:11:06.371 "abort": false, 00:11:06.371 "seek_hole": true, 00:11:06.371 "seek_data": true, 00:11:06.371 "copy": false, 00:11:06.371 "nvme_iov_md": false 00:11:06.371 }, 00:11:06.371 "driver_specific": { 00:11:06.371 "lvol": { 00:11:06.371 "lvol_store_uuid": "282f4a7c-7d16-4874-b53a-d816c58b56e7", 00:11:06.371 "base_bdev": "aio_bdev", 00:11:06.371 "thin_provision": false, 00:11:06.371 "num_allocated_clusters": 38, 00:11:06.371 "snapshot": false, 00:11:06.371 "clone": false, 00:11:06.371 "esnap_clone": false 00:11:06.371 } 00:11:06.371 } 00:11:06.371 } 00:11:06.371 ] 00:11:06.371 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:06.371 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:06.371 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:06.629 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:06.629 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:06.629 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:06.886 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:06.886 12:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:11:07.144 [2024-11-05 12:25:36.197387] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.144 12:25:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:07.144 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:07.402 request: 00:11:07.402 { 00:11:07.402 "uuid": "282f4a7c-7d16-4874-b53a-d816c58b56e7", 00:11:07.402 "method": "bdev_lvol_get_lvstores", 00:11:07.402 "req_id": 1 00:11:07.402 } 00:11:07.402 Got JSON-RPC error response 00:11:07.402 response: 00:11:07.402 { 00:11:07.402 "code": -19, 00:11:07.402 "message": "No such device" 00:11:07.402 } 00:11:07.402 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:07.402 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:07.402 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:07.402 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:07.402 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:07.661 aio_bdev 00:11:07.661 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 93fee5c8-ec5e-4a87-b7b8-80b6542f874b 00:11:07.661 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=93fee5c8-ec5e-4a87-b7b8-80b6542f874b 00:11:07.661 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:07.661 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:07.661 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:07.661 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:07.661 12:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:07.919 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 93fee5c8-ec5e-4a87-b7b8-80b6542f874b -t 2000 00:11:08.177 [ 00:11:08.177 { 00:11:08.177 "name": "93fee5c8-ec5e-4a87-b7b8-80b6542f874b", 00:11:08.177 "aliases": [ 00:11:08.177 "lvs/lvol" 00:11:08.177 ], 00:11:08.177 "product_name": "Logical Volume", 00:11:08.177 "block_size": 4096, 00:11:08.177 "num_blocks": 38912, 00:11:08.177 "uuid": "93fee5c8-ec5e-4a87-b7b8-80b6542f874b", 00:11:08.177 "assigned_rate_limits": { 00:11:08.177 "rw_ios_per_sec": 0, 00:11:08.177 "rw_mbytes_per_sec": 0, 00:11:08.177 "r_mbytes_per_sec": 0, 00:11:08.177 "w_mbytes_per_sec": 0 00:11:08.177 }, 00:11:08.177 "claimed": false, 00:11:08.177 "zoned": false, 00:11:08.177 "supported_io_types": { 00:11:08.177 "read": true, 00:11:08.177 "write": true, 00:11:08.177 "unmap": true, 00:11:08.177 "flush": false, 00:11:08.177 "reset": true, 00:11:08.177 "nvme_admin": false, 00:11:08.177 "nvme_io": false, 00:11:08.177 "nvme_io_md": false, 00:11:08.177 "write_zeroes": true, 00:11:08.177 "zcopy": false, 00:11:08.177 "get_zone_info": false, 00:11:08.177 "zone_management": false, 00:11:08.177 "zone_append": false, 00:11:08.177 "compare": false, 00:11:08.177 "compare_and_write": false, 
00:11:08.177 "abort": false, 00:11:08.177 "seek_hole": true, 00:11:08.177 "seek_data": true, 00:11:08.177 "copy": false, 00:11:08.177 "nvme_iov_md": false 00:11:08.177 }, 00:11:08.177 "driver_specific": { 00:11:08.177 "lvol": { 00:11:08.177 "lvol_store_uuid": "282f4a7c-7d16-4874-b53a-d816c58b56e7", 00:11:08.177 "base_bdev": "aio_bdev", 00:11:08.177 "thin_provision": false, 00:11:08.177 "num_allocated_clusters": 38, 00:11:08.177 "snapshot": false, 00:11:08.177 "clone": false, 00:11:08.177 "esnap_clone": false 00:11:08.177 } 00:11:08.177 } 00:11:08.177 } 00:11:08.177 ] 00:11:08.177 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:08.177 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:08.177 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:08.434 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:08.434 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:08.434 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:08.691 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:08.691 12:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 93fee5c8-ec5e-4a87-b7b8-80b6542f874b 00:11:08.949 12:25:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 282f4a7c-7d16-4874-b53a-d816c58b56e7 00:11:09.207 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:09.772 00:11:09.772 real 0m19.346s 00:11:09.772 user 0m49.023s 00:11:09.772 sys 0m4.608s 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:09.772 ************************************ 00:11:09.772 END TEST lvs_grow_dirty 00:11:09.772 ************************************ 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:09.772 nvmf_trace.0 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.772 rmmod nvme_tcp 00:11:09.772 rmmod nvme_fabrics 00:11:09.772 rmmod nvme_keyring 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 550140 ']' 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 550140 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 550140 ']' 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 550140 
00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 550140 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 550140' 00:11:09.772 killing process with pid 550140 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 550140 00:11:09.772 12:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 550140 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.030 12:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.940 00:11:11.940 real 0m42.696s 00:11:11.940 user 1m12.277s 00:11:11.940 sys 0m8.672s 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.940 ************************************ 00:11:11.940 END TEST nvmf_lvs_grow 00:11:11.940 ************************************ 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.940 ************************************ 00:11:11.940 START TEST nvmf_bdev_io_wait 00:11:11.940 ************************************ 00:11:11.940 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:12.199 * Looking for test storage... 
00:11:12.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:12.199 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.199 --rc genhtml_branch_coverage=1 00:11:12.199 --rc genhtml_function_coverage=1 00:11:12.199 --rc genhtml_legend=1 00:11:12.199 --rc geninfo_all_blocks=1 00:11:12.199 --rc geninfo_unexecuted_blocks=1 00:11:12.199 00:11:12.199 ' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:12.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.199 --rc genhtml_branch_coverage=1 00:11:12.199 --rc genhtml_function_coverage=1 00:11:12.199 --rc genhtml_legend=1 00:11:12.199 --rc geninfo_all_blocks=1 00:11:12.199 --rc geninfo_unexecuted_blocks=1 00:11:12.199 00:11:12.199 ' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:12.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.199 --rc genhtml_branch_coverage=1 00:11:12.199 --rc genhtml_function_coverage=1 00:11:12.199 --rc genhtml_legend=1 00:11:12.199 --rc geninfo_all_blocks=1 00:11:12.199 --rc geninfo_unexecuted_blocks=1 00:11:12.199 00:11:12.199 ' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:12.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.199 --rc genhtml_branch_coverage=1 00:11:12.199 --rc genhtml_function_coverage=1 00:11:12.199 --rc genhtml_legend=1 00:11:12.199 --rc geninfo_all_blocks=1 00:11:12.199 --rc geninfo_unexecuted_blocks=1 00:11:12.199 00:11:12.199 ' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.199 12:25:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.199 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.200 12:25:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.735 12:25:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:14.735 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.735 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:14.735 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.736 12:25:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:14.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.736 
12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:14.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.736 12:25:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:11:14.736 00:11:14.736 --- 10.0.0.2 ping statistics --- 00:11:14.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.736 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:11:14.736 00:11:14.736 --- 10.0.0.1 ping statistics --- 00:11:14.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.736 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=552679 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 552679 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 552679 ']' 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 [2024-11-05 12:25:43.583638] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:11:14.736 [2024-11-05 12:25:43.583722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.736 [2024-11-05 12:25:43.662590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.736 [2024-11-05 12:25:43.711429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.736 [2024-11-05 12:25:43.711489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:14.736 [2024-11-05 12:25:43.711502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.736 [2024-11-05 12:25:43.711512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.736 [2024-11-05 12:25:43.711522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.736 [2024-11-05 12:25:43.713127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.736 [2024-11-05 12:25:43.713178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.736 [2024-11-05 12:25:43.713245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.736 [2024-11-05 12:25:43.713249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:14.736 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.737 12:25:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.737 [2024-11-05 12:25:43.930838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.737 Malloc0 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.737 
12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.737 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:14.995 [2024-11-05 12:25:43.983934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=552826 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=552828 
00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:14.995 { 00:11:14.995 "params": { 00:11:14.995 "name": "Nvme$subsystem", 00:11:14.995 "trtype": "$TEST_TRANSPORT", 00:11:14.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.995 "adrfam": "ipv4", 00:11:14.995 "trsvcid": "$NVMF_PORT", 00:11:14.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.995 "hdgst": ${hdgst:-false}, 00:11:14.995 "ddgst": ${ddgst:-false} 00:11:14.995 }, 00:11:14.995 "method": "bdev_nvme_attach_controller" 00:11:14.995 } 00:11:14.995 EOF 00:11:14.995 )") 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=552830 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:14.995 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:14.995 { 00:11:14.995 "params": { 00:11:14.995 
"name": "Nvme$subsystem", 00:11:14.995 "trtype": "$TEST_TRANSPORT", 00:11:14.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.996 "adrfam": "ipv4", 00:11:14.996 "trsvcid": "$NVMF_PORT", 00:11:14.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.996 "hdgst": ${hdgst:-false}, 00:11:14.996 "ddgst": ${ddgst:-false} 00:11:14.996 }, 00:11:14.996 "method": "bdev_nvme_attach_controller" 00:11:14.996 } 00:11:14.996 EOF 00:11:14.996 )") 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=552833 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:14.996 
12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:14.996 { 00:11:14.996 "params": { 00:11:14.996 "name": "Nvme$subsystem", 00:11:14.996 "trtype": "$TEST_TRANSPORT", 00:11:14.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.996 "adrfam": "ipv4", 00:11:14.996 "trsvcid": "$NVMF_PORT", 00:11:14.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.996 "hdgst": ${hdgst:-false}, 00:11:14.996 "ddgst": ${ddgst:-false} 00:11:14.996 }, 00:11:14.996 "method": "bdev_nvme_attach_controller" 00:11:14.996 } 00:11:14.996 EOF 00:11:14.996 )") 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:14.996 { 00:11:14.996 "params": { 00:11:14.996 "name": "Nvme$subsystem", 00:11:14.996 "trtype": "$TEST_TRANSPORT", 00:11:14.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.996 "adrfam": "ipv4", 00:11:14.996 "trsvcid": "$NVMF_PORT", 00:11:14.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.996 "hdgst": ${hdgst:-false}, 00:11:14.996 "ddgst": ${ddgst:-false} 00:11:14.996 }, 00:11:14.996 "method": "bdev_nvme_attach_controller" 00:11:14.996 } 00:11:14.996 EOF 00:11:14.996 )") 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 552826 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:14.996 "params": { 00:11:14.996 "name": "Nvme1", 00:11:14.996 "trtype": "tcp", 00:11:14.996 "traddr": "10.0.0.2", 00:11:14.996 "adrfam": "ipv4", 00:11:14.996 "trsvcid": "4420", 00:11:14.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.996 "hdgst": false, 00:11:14.996 "ddgst": false 00:11:14.996 }, 00:11:14.996 "method": "bdev_nvme_attach_controller" 00:11:14.996 }' 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:14.996 "params": { 00:11:14.996 "name": "Nvme1", 00:11:14.996 "trtype": "tcp", 00:11:14.996 "traddr": "10.0.0.2", 00:11:14.996 "adrfam": "ipv4", 00:11:14.996 "trsvcid": "4420", 00:11:14.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.996 "hdgst": false, 00:11:14.996 "ddgst": false 00:11:14.996 }, 00:11:14.996 "method": "bdev_nvme_attach_controller" 00:11:14.996 }' 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:14.996 "params": { 00:11:14.996 "name": "Nvme1", 00:11:14.996 "trtype": "tcp", 00:11:14.996 "traddr": "10.0.0.2", 00:11:14.996 "adrfam": "ipv4", 00:11:14.996 "trsvcid": "4420", 00:11:14.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.996 "hdgst": false, 00:11:14.996 "ddgst": false 00:11:14.996 }, 00:11:14.996 "method": "bdev_nvme_attach_controller" 00:11:14.996 }' 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:14.996 12:25:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:14.996 "params": { 00:11:14.996 "name": "Nvme1", 00:11:14.996 "trtype": "tcp", 00:11:14.996 "traddr": "10.0.0.2", 00:11:14.996 "adrfam": "ipv4", 00:11:14.996 "trsvcid": "4420", 00:11:14.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.996 "hdgst": false, 00:11:14.996 "ddgst": false 00:11:14.996 }, 00:11:14.996 "method": "bdev_nvme_attach_controller" 00:11:14.996 }' 00:11:14.996 [2024-11-05 12:25:44.033237] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:11:14.996 [2024-11-05 12:25:44.033237] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:11:14.996 [2024-11-05 12:25:44.033334] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:11:14.996 [2024-11-05 12:25:44.033334] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:11:14.996 [2024-11-05 12:25:44.035373] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:11:14.996 [2024-11-05 12:25:44.035374] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:11:14.996 [2024-11-05 12:25:44.035457] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:11:14.996 [2024-11-05 12:25:44.035457] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:11:14.996 [2024-11-05 12:25:44.221399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.254 [2024-11-05 12:25:44.263241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:15.254 [2024-11-05 12:25:44.322622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.254 [2024-11-05 12:25:44.367032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:15.254 [2024-11-05 12:25:44.397575] app.c: 919:spdk_app_start:
*NOTICE*: Total cores available: 1 00:11:15.254 [2024-11-05 12:25:44.435001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:15.254 [2024-11-05 12:25:44.471504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.512 [2024-11-05 12:25:44.509643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:15.512 Running I/O for 1 seconds... 00:11:15.512 Running I/O for 1 seconds... 00:11:15.512 Running I/O for 1 seconds... 00:11:15.771 Running I/O for 1 seconds... 00:11:16.704 10085.00 IOPS, 39.39 MiB/s 00:11:16.704 Latency(us) 00:11:16.704 [2024-11-05T11:25:45.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.704 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:16.704 Nvme1n1 : 1.01 10150.90 39.65 0.00 0.00 12562.22 5437.06 18641.35 00:11:16.704 [2024-11-05T11:25:45.942Z] =================================================================================================================== 00:11:16.704 [2024-11-05T11:25:45.942Z] Total : 10150.90 39.65 0.00 0.00 12562.22 5437.06 18641.35 00:11:16.704 8378.00 IOPS, 32.73 MiB/s 00:11:16.704 Latency(us) 00:11:16.704 [2024-11-05T11:25:45.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.704 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:16.704 Nvme1n1 : 1.01 8434.89 32.95 0.00 0.00 15101.64 7330.32 25826.04 00:11:16.704 [2024-11-05T11:25:45.942Z] =================================================================================================================== 00:11:16.704 [2024-11-05T11:25:45.942Z] Total : 8434.89 32.95 0.00 0.00 15101.64 7330.32 25826.04 00:11:16.704 7338.00 IOPS, 28.66 MiB/s 00:11:16.704 Latency(us) 00:11:16.704 [2024-11-05T11:25:45.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.704 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:16.704 Nvme1n1 : 1.01 
7406.16 28.93 0.00 0.00 17204.66 6941.96 30680.56 00:11:16.704 [2024-11-05T11:25:45.942Z] =================================================================================================================== 00:11:16.704 [2024-11-05T11:25:45.942Z] Total : 7406.16 28.93 0.00 0.00 17204.66 6941.96 30680.56 00:11:16.704 195880.00 IOPS, 765.16 MiB/s 00:11:16.704 Latency(us) 00:11:16.704 [2024-11-05T11:25:45.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:16.704 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:16.704 Nvme1n1 : 1.00 195511.44 763.72 0.00 0.00 651.17 286.72 1868.99 00:11:16.704 [2024-11-05T11:25:45.942Z] =================================================================================================================== 00:11:16.704 [2024-11-05T11:25:45.942Z] Total : 195511.44 763.72 0.00 0.00 651.17 286.72 1868.99 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 552828 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 552830 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 552833 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:16.704 12:25:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:16.704 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.705 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:16.705 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.705 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.705 rmmod nvme_tcp 00:11:16.963 rmmod nvme_fabrics 00:11:16.963 rmmod nvme_keyring 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 552679 ']' 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 552679 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 552679 ']' 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 552679 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.963 12:25:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 552679 00:11:16.963 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:11:16.963 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.963 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 552679' 00:11:16.963 killing process with pid 552679 00:11:16.963 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 552679 00:11:16.963 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 552679 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.223 12:25:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.132 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.132 00:11:19.132 real 0m7.081s 00:11:19.132 user 0m15.266s 00:11:19.132 sys 0m3.696s 00:11:19.132 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:19.132 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:19.132 ************************************ 00:11:19.132 END TEST nvmf_bdev_io_wait 00:11:19.132 ************************************ 00:11:19.132 12:25:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:19.133 12:25:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:19.133 12:25:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.133 12:25:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:19.133 ************************************ 00:11:19.133 START TEST nvmf_queue_depth 00:11:19.133 ************************************ 00:11:19.133 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:19.133 * Looking for test storage... 
00:11:19.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.133 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:19.133 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:19.133 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:19.392 
12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:19.392 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:19.392 --rc genhtml_branch_coverage=1 00:11:19.392 --rc genhtml_function_coverage=1 00:11:19.392 --rc genhtml_legend=1 00:11:19.392 --rc geninfo_all_blocks=1 00:11:19.392 --rc geninfo_unexecuted_blocks=1 00:11:19.392 00:11:19.392 ' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:19.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.392 --rc genhtml_branch_coverage=1 00:11:19.392 --rc genhtml_function_coverage=1 00:11:19.392 --rc genhtml_legend=1 00:11:19.392 --rc geninfo_all_blocks=1 00:11:19.392 --rc geninfo_unexecuted_blocks=1 00:11:19.392 00:11:19.392 ' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:19.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.392 --rc genhtml_branch_coverage=1 00:11:19.392 --rc genhtml_function_coverage=1 00:11:19.392 --rc genhtml_legend=1 00:11:19.392 --rc geninfo_all_blocks=1 00:11:19.392 --rc geninfo_unexecuted_blocks=1 00:11:19.392 00:11:19.392 ' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:19.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.392 --rc genhtml_branch_coverage=1 00:11:19.392 --rc genhtml_function_coverage=1 00:11:19.392 --rc genhtml_legend=1 00:11:19.392 --rc geninfo_all_blocks=1 00:11:19.392 --rc geninfo_unexecuted_blocks=1 00:11:19.392 00:11:19.392 ' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.392 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.392 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.392 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.393 12:25:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.393 12:25:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.930 12:25:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:21.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:21.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.930 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:21.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:21.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.931 
12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:11:21.931 00:11:21.931 --- 10.0.0.2 ping statistics --- 00:11:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.931 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:11:21.931 00:11:21.931 --- 10.0.0.1 ping statistics --- 00:11:21.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.931 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=555054 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 555054 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 555054 ']' 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:21.931 12:25:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 [2024-11-05 12:25:50.820818] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:11:21.931 [2024-11-05 12:25:50.820945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.931 [2024-11-05 12:25:50.901306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.931 [2024-11-05 12:25:50.945975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.931 [2024-11-05 12:25:50.946028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:21.931 [2024-11-05 12:25:50.946053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.931 [2024-11-05 12:25:50.946073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.931 [2024-11-05 12:25:50.946083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.931 [2024-11-05 12:25:50.946713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.931 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.931 [2024-11-05 12:25:51.090634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.932 Malloc0 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:21.932 [2024-11-05 12:25:51.138744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.932 12:25:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=555084 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 555084 /var/tmp/bdevperf.sock 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 555084 ']' 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:21.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:21.932 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.190 [2024-11-05 12:25:51.186324] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:11:22.190 [2024-11-05 12:25:51.186389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555084 ] 00:11:22.190 [2024-11-05 12:25:51.252784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.190 [2024-11-05 12:25:51.297778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.190 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:22.190 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:22.190 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:22.190 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.190 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.448 NVMe0n1 00:11:22.448 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.448 12:25:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:22.448 Running I/O for 10 seconds... 
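For readability, the per-command xtrace above can be condensed into the handful of steps the queue-depth test actually performs. This is a hedged reconstruction from the log, not the `queue_depth.sh` source: the `$SPDK_DIR` variable, the backgrounding, and the explicit `rpc.py` invocations are assumptions; only the binaries, arguments, NQN, and addresses are taken from the trace.

```shell
# Condensed from the trace above (a sketch; $SPDK_DIR and rpc.py usage are assumptions).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS="ip netns exec cvl_0_0_ns_spdk"   # the target runs inside the test namespace

# Start the NVMe-oF target in the namespace, then wire up the test subsystem.
$NS "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive it with bdevperf: queue depth 1024, 4 KiB verify I/O, 10 seconds.
"$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```

This fragment requires a built SPDK tree and the namespace prepared earlier in the log, so it is a reading aid rather than a standalone script.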
00:11:24.756 8192.00 IOPS, 32.00 MiB/s [2024-11-05T11:25:55.003Z] 8521.00 IOPS, 33.29 MiB/s [2024-11-05T11:25:55.980Z] 8533.33 IOPS, 33.33 MiB/s [2024-11-05T11:25:56.913Z] 8569.50 IOPS, 33.47 MiB/s [2024-11-05T11:25:57.847Z] 8599.20 IOPS, 33.59 MiB/s [2024-11-05T11:25:58.780Z] 8635.83 IOPS, 33.73 MiB/s [2024-11-05T11:25:59.714Z] 8630.86 IOPS, 33.71 MiB/s [2024-11-05T11:26:00.661Z] 8686.62 IOPS, 33.93 MiB/s [2024-11-05T11:26:02.038Z] 8655.89 IOPS, 33.81 MiB/s [2024-11-05T11:26:02.038Z] 8699.50 IOPS, 33.98 MiB/s 00:11:32.800 Latency(us) 00:11:32.800 [2024-11-05T11:26:02.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.800 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:32.800 Verification LBA range: start 0x0 length 0x4000 00:11:32.800 NVMe0n1 : 10.09 8718.70 34.06 0.00 0.00 116995.77 21262.79 69128.34 00:11:32.800 [2024-11-05T11:26:02.038Z] =================================================================================================================== 00:11:32.800 [2024-11-05T11:26:02.038Z] Total : 8718.70 34.06 0.00 0.00 116995.77 21262.79 69128.34 00:11:32.800 { 00:11:32.800 "results": [ 00:11:32.800 { 00:11:32.800 "job": "NVMe0n1", 00:11:32.800 "core_mask": "0x1", 00:11:32.800 "workload": "verify", 00:11:32.800 "status": "finished", 00:11:32.800 "verify_range": { 00:11:32.800 "start": 0, 00:11:32.800 "length": 16384 00:11:32.800 }, 00:11:32.800 "queue_depth": 1024, 00:11:32.800 "io_size": 4096, 00:11:32.800 "runtime": 10.094395, 00:11:32.800 "iops": 8718.699832927085, 00:11:32.800 "mibps": 34.057421222371424, 00:11:32.800 "io_failed": 0, 00:11:32.800 "io_timeout": 0, 00:11:32.800 "avg_latency_us": 116995.76727991347, 00:11:32.800 "min_latency_us": 21262.79111111111, 00:11:32.800 "max_latency_us": 69128.34370370371 00:11:32.800 } 00:11:32.800 ], 00:11:32.800 "core_count": 1 00:11:32.800 } 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 555084 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 555084 ']' 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 555084 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 555084 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 555084' 00:11:32.800 killing process with pid 555084 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 555084 00:11:32.800 Received shutdown signal, test time was about 10.000000 seconds 00:11:32.800 00:11:32.800 Latency(us) 00:11:32.800 [2024-11-05T11:26:02.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:32.800 [2024-11-05T11:26:02.038Z] =================================================================================================================== 00:11:32.800 [2024-11-05T11:26:02.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 555084 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.800 12:26:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.800 rmmod nvme_tcp 00:11:32.800 rmmod nvme_fabrics 00:11:32.800 rmmod nvme_keyring 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 555054 ']' 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 555054 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 555054 ']' 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 555054 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:32.800 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 555054 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 555054' 00:11:33.060 killing process with pid 555054 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 555054 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 555054 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.060 12:26:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.601 12:26:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.601 00:11:35.601 real 0m16.011s 00:11:35.601 user 0m22.487s 00:11:35.601 sys 0m3.046s 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:35.601 ************************************ 00:11:35.601 END TEST nvmf_queue_depth 00:11:35.601 ************************************ 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:35.601 ************************************ 00:11:35.601 START TEST nvmf_target_multipath 00:11:35.601 ************************************ 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:35.601 * Looking for test storage... 
00:11:35.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:35.601 12:26:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
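The trace above steps through `scripts/common.sh`'s `cmp_versions` helper, which splits dotted versions on `.-:` and compares them field by field — here `lt 1.15 2` holds because `1 < 2` in the first field. A simplified standalone re-implementation of that idea (a sketch of the technique, not the SPDK source; numeric version fields are assumed):

```shell
# version_lt A B: succeed if version A sorts strictly before version B.
# Fields are split on . - : and compared numerically, missing fields read as 0.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # assumes purely numeric fields
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```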
00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.601 --rc genhtml_branch_coverage=1 00:11:35.601 --rc genhtml_function_coverage=1 00:11:35.601 --rc genhtml_legend=1 00:11:35.601 --rc geninfo_all_blocks=1 00:11:35.601 --rc geninfo_unexecuted_blocks=1 00:11:35.601 00:11:35.601 ' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.601 --rc genhtml_branch_coverage=1 00:11:35.601 --rc genhtml_function_coverage=1 00:11:35.601 --rc genhtml_legend=1 00:11:35.601 --rc geninfo_all_blocks=1 00:11:35.601 --rc geninfo_unexecuted_blocks=1 00:11:35.601 00:11:35.601 ' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:35.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.601 --rc genhtml_branch_coverage=1 00:11:35.601 --rc genhtml_function_coverage=1 00:11:35.601 --rc genhtml_legend=1 00:11:35.601 --rc geninfo_all_blocks=1 00:11:35.601 --rc geninfo_unexecuted_blocks=1 00:11:35.601 00:11:35.601 ' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.601 --rc genhtml_branch_coverage=1 00:11:35.601 --rc genhtml_function_coverage=1 00:11:35.601 --rc genhtml_legend=1 00:11:35.601 --rc geninfo_all_blocks=1 00:11:35.601 --rc geninfo_unexecuted_blocks=1 00:11:35.601 00:11:35.601 ' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.601 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.602 12:26:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:37.507 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:37.507 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.507 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:37.508 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:37.508 12:26:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:37.508 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.508 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:11:37.767 00:11:37.767 --- 10.0.0.2 ping statistics --- 00:11:37.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.767 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:11:37.767 00:11:37.767 --- 10.0.0.1 ping statistics --- 00:11:37.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.767 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:37.767 only one NIC for nvmf test 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:37.767 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:37.768 12:26:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.768 rmmod nvme_tcp 00:11:37.768 rmmod nvme_fabrics 00:11:37.768 rmmod nvme_keyring 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.768 12:26:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.308 00:11:40.308 real 0m4.608s 00:11:40.308 user 0m0.944s 00:11:40.308 sys 0m1.679s 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.308 12:26:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:40.308 ************************************ 00:11:40.308 END TEST nvmf_target_multipath 00:11:40.308 ************************************ 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.308 ************************************ 00:11:40.308 START TEST nvmf_zcopy 00:11:40.308 ************************************ 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:40.308 * Looking for test storage... 00:11:40.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.308 12:26:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.308 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.309 --rc genhtml_branch_coverage=1 00:11:40.309 --rc genhtml_function_coverage=1 00:11:40.309 --rc genhtml_legend=1 00:11:40.309 --rc geninfo_all_blocks=1 00:11:40.309 --rc geninfo_unexecuted_blocks=1 00:11:40.309 00:11:40.309 ' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.309 --rc genhtml_branch_coverage=1 00:11:40.309 --rc genhtml_function_coverage=1 00:11:40.309 --rc genhtml_legend=1 00:11:40.309 --rc geninfo_all_blocks=1 00:11:40.309 --rc geninfo_unexecuted_blocks=1 00:11:40.309 00:11:40.309 ' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.309 --rc genhtml_branch_coverage=1 00:11:40.309 --rc genhtml_function_coverage=1 00:11:40.309 --rc genhtml_legend=1 00:11:40.309 --rc geninfo_all_blocks=1 00:11:40.309 --rc geninfo_unexecuted_blocks=1 00:11:40.309 00:11:40.309 ' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:40.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.309 --rc genhtml_branch_coverage=1 00:11:40.309 --rc 
genhtml_function_coverage=1 00:11:40.309 --rc genhtml_legend=1 00:11:40.309 --rc geninfo_all_blocks=1 00:11:40.309 --rc geninfo_unexecuted_blocks=1 00:11:40.309 00:11:40.309 ' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.309 12:26:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.309 12:26:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.309 12:26:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.211 12:26:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:42.211 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:42.211 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.211 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:42.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:42.212 12:26:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:42.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.212 12:26:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.212 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:11:42.470 00:11:42.470 --- 10.0.0.2 ping statistics --- 00:11:42.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.470 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:11:42.470 00:11:42.470 --- 10.0.0.1 ping statistics --- 00:11:42.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.470 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=560301 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 560301 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 560301 ']' 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:42.470 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.470 [2024-11-05 12:26:11.621962] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:11:42.471 [2024-11-05 12:26:11.622037] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.471 [2024-11-05 12:26:11.692912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.729 [2024-11-05 12:26:11.736512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.729 [2024-11-05 12:26:11.736574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:42.729 [2024-11-05 12:26:11.736595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.729 [2024-11-05 12:26:11.736611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.729 [2024-11-05 12:26:11.736625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.729 [2024-11-05 12:26:11.737296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.729 [2024-11-05 12:26:11.867574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.729 [2024-11-05 12:26:11.883782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.729 malloc0 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:42.729 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:42.730 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:42.730 { 00:11:42.730 "params": { 00:11:42.730 "name": "Nvme$subsystem", 00:11:42.730 "trtype": "$TEST_TRANSPORT", 00:11:42.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.730 "adrfam": "ipv4", 00:11:42.730 "trsvcid": "$NVMF_PORT", 00:11:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.730 "hdgst": ${hdgst:-false}, 00:11:42.730 "ddgst": ${ddgst:-false} 00:11:42.730 }, 00:11:42.730 "method": "bdev_nvme_attach_controller" 00:11:42.730 } 00:11:42.730 EOF 00:11:42.730 )") 00:11:42.730 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:42.730 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:42.730 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:42.730 12:26:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:42.730 "params": { 00:11:42.730 "name": "Nvme1", 00:11:42.730 "trtype": "tcp", 00:11:42.730 "traddr": "10.0.0.2", 00:11:42.730 "adrfam": "ipv4", 00:11:42.730 "trsvcid": "4420", 00:11:42.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.730 "hdgst": false, 00:11:42.730 "ddgst": false 00:11:42.730 }, 00:11:42.730 "method": "bdev_nvme_attach_controller" 00:11:42.730 }' 00:11:42.730 [2024-11-05 12:26:11.960967] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:11:42.730 [2024-11-05 12:26:11.961046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid560329 ] 00:11:42.988 [2024-11-05 12:26:12.032740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.988 [2024-11-05 12:26:12.078641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.246 Running I/O for 10 seconds... 
00:11:45.113 5936.00 IOPS, 46.38 MiB/s [2024-11-05T11:26:15.724Z] 5994.50 IOPS, 46.83 MiB/s [2024-11-05T11:26:16.658Z] 6029.00 IOPS, 47.10 MiB/s [2024-11-05T11:26:17.591Z] 6041.75 IOPS, 47.20 MiB/s [2024-11-05T11:26:18.526Z] 6044.20 IOPS, 47.22 MiB/s [2024-11-05T11:26:19.460Z] 6049.50 IOPS, 47.26 MiB/s [2024-11-05T11:26:20.393Z] 6056.71 IOPS, 47.32 MiB/s [2024-11-05T11:26:21.766Z] 6040.12 IOPS, 47.19 MiB/s [2024-11-05T11:26:22.700Z] 6038.67 IOPS, 47.18 MiB/s [2024-11-05T11:26:22.700Z] 6043.50 IOPS, 47.21 MiB/s 00:11:53.462 Latency(us) 00:11:53.462 [2024-11-05T11:26:22.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.462 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:53.462 Verification LBA range: start 0x0 length 0x1000 00:11:53.462 Nvme1n1 : 10.02 6044.43 47.22 0.00 0.00 21119.37 1711.22 28738.75 00:11:53.462 [2024-11-05T11:26:22.700Z] =================================================================================================================== 00:11:53.462 [2024-11-05T11:26:22.700Z] Total : 6044.43 47.22 0.00 0.00 21119.37 1711.22 28738.75 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=561645 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:53.462 12:26:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:53.462 { 00:11:53.462 "params": { 00:11:53.462 "name": "Nvme$subsystem", 00:11:53.462 "trtype": "$TEST_TRANSPORT", 00:11:53.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:53.462 "adrfam": "ipv4", 00:11:53.462 "trsvcid": "$NVMF_PORT", 00:11:53.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:53.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:53.462 "hdgst": ${hdgst:-false}, 00:11:53.462 "ddgst": ${ddgst:-false} 00:11:53.462 }, 00:11:53.462 "method": "bdev_nvme_attach_controller" 00:11:53.462 } 00:11:53.462 EOF 00:11:53.462 )") 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:53.462 [2024-11-05 12:26:22.549208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.462 [2024-11-05 12:26:22.549250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:53.462 12:26:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:53.462 "params": { 00:11:53.462 "name": "Nvme1", 00:11:53.462 "trtype": "tcp", 00:11:53.462 "traddr": "10.0.0.2", 00:11:53.462 "adrfam": "ipv4", 00:11:53.462 "trsvcid": "4420", 00:11:53.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:53.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:53.462 "hdgst": false, 00:11:53.462 "ddgst": false 00:11:53.463 }, 00:11:53.463 "method": "bdev_nvme_attach_controller" 00:11:53.463 }' 00:11:53.463 [2024-11-05 12:26:22.557166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.557191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.565179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.565203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.573197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.573233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.581230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.581253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.586101] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:11:53.463 [2024-11-05 12:26:22.586173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561645 ] 00:11:53.463 [2024-11-05 12:26:22.589256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.589280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.597275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.597298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.605283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.605307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.613301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.613324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.621338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.621360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.629340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.629362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.637366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.637389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:53.463 [2024-11-05 12:26:22.645382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.645404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.653403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.653425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.656996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.463 [2024-11-05 12:26:22.661430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.661454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.669475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.669514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.677475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.677501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.685488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.685509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.693508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.693529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.463 [2024-11-05 12:26:22.701548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.463 [2024-11-05 12:26:22.701571] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.721 [2024-11-05 12:26:22.707035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.721 [2024-11-05 12:26:22.709556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.721 [2024-11-05 12:26:22.709577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.721 [2024-11-05 12:26:22.717572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.721 [2024-11-05 12:26:22.717594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.721 [2024-11-05 12:26:22.725621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.721 [2024-11-05 12:26:22.725658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.721 [2024-11-05 12:26:22.733641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.721 [2024-11-05 12:26:22.733679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.721 [2024-11-05 12:26:22.741661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.721 [2024-11-05 12:26:22.741699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.721 [2024-11-05 12:26:22.749684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.721 [2024-11-05 12:26:22.749724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.757710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.757749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.765728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:11:53.722 [2024-11-05 12:26:22.765767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.773726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.773749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.781764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.781794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.789792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.789828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.797818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.797880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.805812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.805848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.813837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.813883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.821901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.821927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.829924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 
12:26:22.829949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.837944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.837969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.845963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.845987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.853966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.853990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.861997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.862020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.870018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.870040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.878040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.878063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.886063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.886085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.894087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.894111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.902111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.902150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.910144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.910166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.918168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.918190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.926198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.926234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.934235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.934257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.942246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.942268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.950254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.950277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.722 [2024-11-05 12:26:22.958293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.722 [2024-11-05 12:26:22.958315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 
[2024-11-05 12:26:22.966311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:22.966333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:22.974333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:22.974355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:22.982355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:22.982376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:22.990380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:22.990402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:22.998398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:22.998420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.006421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.006442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.014443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.014464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.022467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.022488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.030490] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.030512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.038517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.038539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.083128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.083156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.090662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.090685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 Running I/O for 5 seconds... 00:11:53.980 [2024-11-05 12:26:23.098682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.098704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.112901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.112931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.123953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.123988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.980 [2024-11-05 12:26:23.136854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.980 [2024-11-05 12:26:23.136893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.981 [2024-11-05 12:26:23.147711] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.981 [2024-11-05 12:26:23.147744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.981 [2024-11-05 12:26:23.158615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.981 [2024-11-05 12:26:23.158644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.981 [2024-11-05 12:26:23.171402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.981 [2024-11-05 12:26:23.171446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.981 [2024-11-05 12:26:23.181814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.981 [2024-11-05 12:26:23.181842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.981 [2024-11-05 12:26:23.192819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.981 [2024-11-05 12:26:23.192872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.981 [2024-11-05 12:26:23.203541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.981 [2024-11-05 12:26:23.203568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.981 [2024-11-05 12:26:23.214505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.981 [2024-11-05 12:26:23.214535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.227097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.227125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.237300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.237328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.247872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.247900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.258640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.258668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.271323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.271351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.281627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.281654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.292246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.292274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.302849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.302899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.313564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.313592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.324232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 
[2024-11-05 12:26:23.324260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.334505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.334556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.345265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.345294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.355775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.355803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.366746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.366774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.377540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.377568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.388359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.388387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.399058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.399086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.411921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.411950] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.423999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.424026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.433089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.433117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.444738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.444767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.457819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.457847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.468094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.468121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.239 [2024-11-05 12:26:23.479079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.239 [2024-11-05 12:26:23.479108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.497 [2024-11-05 12:26:23.491066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.497 [2024-11-05 12:26:23.491094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.497 [2024-11-05 12:26:23.499648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.497 [2024-11-05 12:26:23.499676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:54.497 [2024-11-05 12:26:23.512765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.497 [2024-11-05 12:26:23.512792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.497 [2024-11-05 12:26:23.522923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.497 [2024-11-05 12:26:23.522951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.497 [2024-11-05 12:26:23.533549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.497 [2024-11-05 12:26:23.533577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.497 [2024-11-05 12:26:23.543925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.497 [2024-11-05 12:26:23.543965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.497 [2024-11-05 12:26:23.554855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.497 [2024-11-05 12:26:23.554891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.497 [2024-11-05 12:26:23.567527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.567556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.578116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.578144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.588771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.588799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.601672] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.601700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.611756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.611785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.622405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.622433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.635767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.635795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.646186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.646214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.656608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.656636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.667317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.667345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.677688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.677716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.688424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.688451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.701002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.701030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.711087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.711115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.721840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.721876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.498 [2024-11-05 12:26:23.734302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.498 [2024-11-05 12:26:23.734331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.746087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.746115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.755002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.755029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.766516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.766543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.779095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 
[2024-11-05 12:26:23.779122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.788948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.788975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.800033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.800061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.811005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.811032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.821620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.821647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.834001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.834029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.843758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.843786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.854572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.854600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.865484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.865513] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.878069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.878097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.889940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.889967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.899222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.899249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.910131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.910159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.920269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.920297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.930986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.931014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.943983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.944011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.954459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.954487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:54.756 [2024-11-05 12:26:23.964798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.964826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.975602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.975630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.986371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.986399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.756 [2024-11-05 12:26:23.996981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.756 [2024-11-05 12:26:23.997009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.014 [2024-11-05 12:26:24.007942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.007970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.019009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.019037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.032985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.033013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.043398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.043426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.053966] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.053994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.064448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.064475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.074919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.074947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.085669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.085697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.096153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.096181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 11795.00 IOPS, 92.15 MiB/s [2024-11-05T11:26:24.253Z] [2024-11-05 12:26:24.107023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.107052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.120008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.120036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.130068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.130096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.140566] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.140594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.151450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.151479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.162678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.162705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.173542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.173571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.184576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.184604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.196978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.197006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.208576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.208604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.217637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.217665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.229169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.229197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.241884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.241911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.015 [2024-11-05 12:26:24.252219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.015 [2024-11-05 12:26:24.252247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.262629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.262658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.273661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.273689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.286414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.286441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.296635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.296663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.307434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.307461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.318190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 
[2024-11-05 12:26:24.318218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.329400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.329429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.342061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.342090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.352216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.352244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.362729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.362758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.373748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.373783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.386454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.386483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.396376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.396404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.406953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.406981] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.419164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.419192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.428853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.273 [2024-11-05 12:26:24.428890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.273 [2024-11-05 12:26:24.439665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.274 [2024-11-05 12:26:24.439695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.274 [2024-11-05 12:26:24.452615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.274 [2024-11-05 12:26:24.452654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.274 [2024-11-05 12:26:24.462335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.274 [2024-11-05 12:26:24.462363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.274 [2024-11-05 12:26:24.472945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.274 [2024-11-05 12:26:24.472974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.274 [2024-11-05 12:26:24.486059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.274 [2024-11-05 12:26:24.486087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.274 [2024-11-05 12:26:24.496437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.274 [2024-11-05 12:26:24.496465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:55.274 [2024-11-05 12:26:24.507201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.274 [2024-11-05 12:26:24.507240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.517870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.517899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.528707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.528735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.538810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.538838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.549275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.549303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.559549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.559577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.570608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.570636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.580990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.581027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.591770] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.591799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.602548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.602576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.615167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.615197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.625456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.625485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.636013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.636042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.646349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.646377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.656676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.656704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.667218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.667246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.677498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.677526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.688198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.688225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.698836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.698876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.709350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.709393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.720064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.720092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.730363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.730390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.741020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.741048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.751653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.751681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.762075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 
[2024-11-05 12:26:24.762102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.532 [2024-11-05 12:26:24.772393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.532 [2024-11-05 12:26:24.772421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.783087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.783123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.793964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.793992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.804785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.804813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.815760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.815788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.826483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.826511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.839064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.839092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.848666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.848694] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.859561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.859588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.870724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.870766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.881702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.881730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.892440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.892468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.904716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.904744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.914637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.914666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.925500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.925529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.936389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.936417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:55.790 [2024-11-05 12:26:24.946991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.947019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.958022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.958050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.969007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.969035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.981615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.981643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:24.992060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:24.992099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:25.002791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:25.002819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:25.015920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.790 [2024-11-05 12:26:25.015949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.790 [2024-11-05 12:26:25.025661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.791 [2024-11-05 12:26:25.025689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.036595] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.036623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.047526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.047553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.058412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.058440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.069282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.069309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.082605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.082633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.092755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.092784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.103241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.103268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 11848.00 IOPS, 92.56 MiB/s [2024-11-05T11:26:25.286Z] [2024-11-05 12:26:25.114093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.114120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.126795] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.126838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.137162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.137190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.147514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.147542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.158169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.158196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.168473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.168501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.178847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.178882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.189667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.189695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.200516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.200544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.213068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.213096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.223307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.223335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.233830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.233869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.244348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.244376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.254960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.254988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.265547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.265574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.276305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.276332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.048 [2024-11-05 12:26:25.289172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.048 [2024-11-05 12:26:25.289199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.301072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 
[2024-11-05 12:26:25.301100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.309984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.310012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.321681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.321709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.334195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.334223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.346017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.346045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.355133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.355166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.366724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.366752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.379429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.379456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.391309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.391337] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.400809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.400837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.411855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.411892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.424347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.424375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.434139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.434168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.444572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.444600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.454985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.455014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.465614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.465643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.476422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.476450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:56.307 [2024-11-05 12:26:25.489010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.489038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.499159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.499187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.509833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.509869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.520554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.520582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.531016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.531043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.307 [2024-11-05 12:26:25.541933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.307 [2024-11-05 12:26:25.541961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.552565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.552593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.563724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.563752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.574968] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.574998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.585611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.585639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.596307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.596336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.608972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.609010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.619431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.619459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.630302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.630331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.643002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.565 [2024-11-05 12:26:25.643030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.565 [2024-11-05 12:26:25.654278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.654306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.663641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.663669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.675094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.675122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.688682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.688725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.700911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.700940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.710542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.710570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.722181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.722209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.733145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.733173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.743917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.743945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.754707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 
[2024-11-05 12:26:25.754734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.768447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.768475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.778796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.778824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.789367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.789395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.566 [2024-11-05 12:26:25.801919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.566 [2024-11-05 12:26:25.801947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.812218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.812246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.822782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.822819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.833684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.833711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.846154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.846182] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.856398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.856426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.867616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.867644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.880017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.880045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.889481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.889510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.902354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.902382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.912741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.912769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.923270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.923298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.933751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.933779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:56.824 [2024-11-05 12:26:25.944273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.944301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.955443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.955470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.967010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.967038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.977189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.977217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.987772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.987800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:25.998544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:25.998571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:26.009307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:26.009336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:26.020124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:26.020153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:26.030774] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:26.030811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:26.041981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:26.042009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:26.054560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:26.054588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.824 [2024-11-05 12:26:26.064679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.824 [2024-11-05 12:26:26.064707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.075470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.075497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.086290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.086318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.096904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.096932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 11859.67 IOPS, 92.65 MiB/s [2024-11-05T11:26:26.320Z] [2024-11-05 12:26:26.107630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.107658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.118560] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.118603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.131172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.131200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.141474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.141502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.152044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.152072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.165113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.165141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.175428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.082 [2024-11-05 12:26:26.175456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.082 [2024-11-05 12:26:26.186057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.186086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.198543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.198571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.208213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.208241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.221073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.221100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.230948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.230976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.241417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.241453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.251846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.251883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.262161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.262189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.272661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.272689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.283036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.283064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.293818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 
[2024-11-05 12:26:26.293847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.306515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.306557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.083 [2024-11-05 12:26:26.316511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.083 [2024-11-05 12:26:26.316539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.326990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.327018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.339920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.339948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.349972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.350000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.360553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.360581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.371325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.371354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.381828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.381856] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.392633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.392661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.403051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.403079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.414048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.414075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.426368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.426410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.436139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.436168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.447051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.447079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.457559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.457587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.467684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.467712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:57.341 [2024-11-05 12:26:26.478293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.478321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.488565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.488593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.499142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.499170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.511360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.511388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.521747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.521775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.532437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.532465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.545442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.545469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.555510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.555538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.566029] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.566057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.341 [2024-11-05 12:26:26.576940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.341 [2024-11-05 12:26:26.576969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.587676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.587705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.600422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.600450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.610426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.610469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.621311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.621339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.633706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.633734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.644113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.644143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.654797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.654827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.665605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.665634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.675928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.675956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.686389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.686417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.697118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.697145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.708019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.708047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.720717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.720745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.732211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.732255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.741310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 
[2024-11-05 12:26:26.741338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.752681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.752710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.765338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.599 [2024-11-05 12:26:26.765381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.599 [2024-11-05 12:26:26.775742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.600 [2024-11-05 12:26:26.775770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.600 [2024-11-05 12:26:26.786442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.600 [2024-11-05 12:26:26.786470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.600 [2024-11-05 12:26:26.796940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.600 [2024-11-05 12:26:26.796968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.600 [2024-11-05 12:26:26.807684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.600 [2024-11-05 12:26:26.807720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.600 [2024-11-05 12:26:26.818365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.600 [2024-11-05 12:26:26.818400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.600 [2024-11-05 12:26:26.829622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.600 [2024-11-05 12:26:26.829650] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.600 [2024-11-05 12:26:26.839983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.600 [2024-11-05 12:26:26.840011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats roughly every 10 ms from 12:26:26.850 through 12:26:28.315 while the I/O job runs; repeats elided ...]
11883.00 IOPS, 92.84 MiB/s [2024-11-05T11:26:27.352Z]
11884.00 IOPS, 92.84 MiB/s [2024-11-05T11:26:28.128Z]
00:11:58.890 Latency(us) 00:11:58.890 [2024-11-05T11:26:28.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.890 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:58.890 Nvme1n1 : 5.01 11895.53 92.93 0.00 0.00 10749.04 4684.61 21456.97 00:11:58.890 [2024-11-05T11:26:28.128Z] =================================================================================================================== 00:11:58.890 [2024-11-05T11:26:28.128Z] Total : 11895.53 92.93 0.00 0.00 10749.04 4684.61 21456.97
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (561645) - No such process 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 561645 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
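The repeated "Requested NSID 1 already in use" errors in the log are expected: the zcopy test keeps re-adding a namespace with an NSID that the subsystem already holds, and SPDK rejects the duplicate, which the RPC layer surfaces as "Unable to add namespace". A minimal Python model of that per-subsystem NSID bookkeeping (purely illustrative, not SPDK code) is:

```python
# Toy model of per-subsystem NSID allocation: adding a namespace with an
# NSID that is already in use must fail, which is exactly what the
# repeated errors in the log show.

class Subsystem:
    def __init__(self):
        self.namespaces = {}  # nsid -> bdev name

    def add_ns(self, bdev, nsid):
        """Return the NSID on success, or None if the NSID is taken."""
        if nsid in self.namespaces:
            print(f"*ERROR*: Requested NSID {nsid} already in use")
            return None
        self.namespaces[nsid] = bdev
        return nsid

    def remove_ns(self, nsid):
        self.namespaces.pop(nsid, None)

subsys = Subsystem()
assert subsys.add_ns("malloc0", 1) == 1      # first add succeeds
assert subsys.add_ns("delay0", 1) is None    # duplicate NSID rejected
subsys.remove_ns(1)                          # after removal...
assert subsys.add_ns("delay0", 1) == 1       # ...the NSID can be reused
```

This mirrors the sequence the test script follows next: nvmf_subsystem_remove_ns frees NSID 1 so that nvmf_subsystem_add_ns can attach the delay0 bdev under the same NSID.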
00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:59.148 delay0 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.148 12:26:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:59.405 [2024-11-05 12:26:28.434707] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:05.960 Initializing NVMe Controllers 00:12:05.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:05.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 
with lcore 0 00:12:05.960 Initialization complete. Launching workers. 00:12:05.960 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 861 00:12:05.960 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1148, failed to submit 33 00:12:05.960 success 979, unsuccessful 169, failed 0 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.960 rmmod nvme_tcp 00:12:05.960 rmmod nvme_fabrics 00:12:05.960 rmmod nvme_keyring 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 560301 ']' 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 560301 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 560301 ']' 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 560301 
00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 560301 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 560301' 00:12:05.960 killing process with pid 560301 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 560301 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 560301 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.960 12:26:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.960 12:26:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.868 12:26:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.868 00:12:07.868 real 0m27.960s 00:12:07.868 user 0m41.286s 00:12:07.868 sys 0m8.195s 00:12:07.868 12:26:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:07.868 12:26:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:07.868 ************************************ 00:12:07.868 END TEST nvmf_zcopy 00:12:07.868 ************************************ 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:07.868 ************************************ 00:12:07.868 START TEST nvmf_nmic 00:12:07.868 ************************************ 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:07.868 * Looking for test storage... 
00:12:07.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:12:07.868 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.129 12:26:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.129 --rc genhtml_branch_coverage=1 00:12:08.129 --rc genhtml_function_coverage=1 00:12:08.129 --rc genhtml_legend=1 00:12:08.129 --rc geninfo_all_blocks=1 00:12:08.129 --rc geninfo_unexecuted_blocks=1 
00:12:08.129 00:12:08.129 ' 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.129 --rc genhtml_branch_coverage=1 00:12:08.129 --rc genhtml_function_coverage=1 00:12:08.129 --rc genhtml_legend=1 00:12:08.129 --rc geninfo_all_blocks=1 00:12:08.129 --rc geninfo_unexecuted_blocks=1 00:12:08.129 00:12:08.129 ' 00:12:08.129 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.130 --rc genhtml_branch_coverage=1 00:12:08.130 --rc genhtml_function_coverage=1 00:12:08.130 --rc genhtml_legend=1 00:12:08.130 --rc geninfo_all_blocks=1 00:12:08.130 --rc geninfo_unexecuted_blocks=1 00:12:08.130 00:12:08.130 ' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.130 --rc genhtml_branch_coverage=1 00:12:08.130 --rc genhtml_function_coverage=1 00:12:08.130 --rc genhtml_legend=1 00:12:08.130 --rc geninfo_all_blocks=1 00:12:08.130 --rc geninfo_unexecuted_blocks=1 00:12:08.130 00:12:08.130 ' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.130 12:26:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.130 
12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.130 12:26:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.662 12:26:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:10.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:10.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:10.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:10.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:10.662 
12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.662 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:12:10.663 00:12:10.663 --- 10.0.0.2 ping statistics --- 00:12:10.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.663 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:10.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:10.663 00:12:10.663 --- 10.0.0.1 ping statistics --- 00:12:10.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.663 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=565044 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 565044 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 565044 ']' 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.663 [2024-11-05 12:26:39.599558] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:12:10.663 [2024-11-05 12:26:39.599665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.663 [2024-11-05 12:26:39.678197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.663 [2024-11-05 12:26:39.728311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.663 [2024-11-05 12:26:39.728374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:10.663 [2024-11-05 12:26:39.728402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.663 [2024-11-05 12:26:39.728413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.663 [2024-11-05 12:26:39.728423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.663 [2024-11-05 12:26:39.729981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.663 [2024-11-05 12:26:39.730039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.663 [2024-11-05 12:26:39.730105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.663 [2024-11-05 12:26:39.730107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.663 [2024-11-05 12:26:39.884259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.663 
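The interface plumbing traced above (nvmf/common.sh `nvmf_tcp_init`) can be read as a standalone script. This is a dry-run sketch: the `run` wrapper only echoes each command, because the real ones need root and the actual `cvl_0_0`/`cvl_0_1` netdevs from this rig; interface names and addresses are copied from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built by nvmf_tcp_init above.
# The target-side NIC moves into a netns; the initiator NIC stays in the
# root namespace, so one host can exercise a real TCP path end to end.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0       # moved into $NS, gets 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in root namespace, gets 10.0.0.1

run() { echo "+ $*"; }  # swap for "$@" (as root) to apply for real

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings at the end mirror the log's connectivity check before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.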
12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.663 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 Malloc0 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 [2024-11-05 12:26:39.952727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:10.921 test case1: single bdev can't be used in multiple subsystems 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.921 [2024-11-05 12:26:39.976549] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:10.921 [2024-11-05 
12:26:39.976578] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:10.921 [2024-11-05 12:26:39.976593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.921 request: 00:12:10.921 { 00:12:10.921 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:10.921 "namespace": { 00:12:10.921 "bdev_name": "Malloc0", 00:12:10.921 "no_auto_visible": false 00:12:10.921 }, 00:12:10.921 "method": "nvmf_subsystem_add_ns", 00:12:10.921 "req_id": 1 00:12:10.921 } 00:12:10.921 Got JSON-RPC error response 00:12:10.921 response: 00:12:10.921 { 00:12:10.921 "code": -32602, 00:12:10.921 "message": "Invalid parameters" 00:12:10.921 } 00:12:10.921 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:10.922 Adding namespace failed - expected result. 
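Test case1 above relies on an expected-failure pattern: the RPC is supposed to fail (Malloc0 is already claimed by cnode1), so its exit status is captured into `nmic_status` instead of aborting the script. A minimal sketch of that pattern, with `rpc_cmd` stubbed (an assumption; the real one talks to the target over /var/tmp/spdk.sock):

```shell
# Stub rpc_cmd: adding Malloc0 to a second subsystem fails, as in the
# "already claimed: type exclusive_write" error captured in the log.
rpc_cmd() {
  echo '{"code": -32602, "message": "Invalid parameters"}' >&2
  return 1
}

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 2>/dev/null \
  || nmic_status=1

if [ "$nmic_status" -eq 0 ]; then
  echo 'Adding namespace passed - failure expected.'
  exit 1
fi
echo ' Adding namespace failed - expected result.'
```

The `|| nmic_status=1` is what keeps a deliberate failure from tripping the suite's error handling while still recording that the failure happened.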
00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:10.922 test case2: host connect to nvmf target in multiple paths 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:10.922 [2024-11-05 12:26:39.984660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.922 12:26:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.487 12:26:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:12.420 12:26:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.420 12:26:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:12:12.420 12:26:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.420 12:26:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:12.420 12:26:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
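The `waitforserial` trace that follows polls `lsblk` until a block device carrying the subsystem serial appears, retrying up to 15 times. A sketch of that loop, with `lsblk` stubbed (an assumption) so it runs without a live NVMe-oF connection:

```shell
# Stub lsblk: pretend one connected namespace is already visible.
lsblk() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll until `want` devices with the given serial show up, as
# autotest_common.sh's waitforserial does in the trace above.
waitforserial() {
  local serial=$1 want=${2:-1} i=0 found=0
  while (( i++ <= 15 )); do
    found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
    (( found == want )) && return 0
    sleep 1
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo 'serial visible'
```

Success here simply means the kernel has enumerated the connected controller's namespace; only then does the suite hand `/dev/nvme0n1` to fio.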
00:12:14.317 12:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:14.317 12:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:14.317 12:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.317 12:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:14.317 12:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.317 12:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:12:14.317 12:26:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:14.317 [global] 00:12:14.317 thread=1 00:12:14.317 invalidate=1 00:12:14.317 rw=write 00:12:14.317 time_based=1 00:12:14.317 runtime=1 00:12:14.317 ioengine=libaio 00:12:14.317 direct=1 00:12:14.317 bs=4096 00:12:14.317 iodepth=1 00:12:14.317 norandommap=0 00:12:14.317 numjobs=1 00:12:14.317 00:12:14.317 verify_dump=1 00:12:14.317 verify_backlog=512 00:12:14.317 verify_state_save=0 00:12:14.317 do_verify=1 00:12:14.317 verify=crc32c-intel 00:12:14.317 [job0] 00:12:14.317 filename=/dev/nvme0n1 00:12:14.317 Could not set queue depth (nvme0n1) 00:12:14.317 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:14.317 fio-3.35 00:12:14.317 Starting 1 thread 00:12:15.691 00:12:15.691 job0: (groupid=0, jobs=1): err= 0: pid=565559: Tue Nov 5 12:26:44 2024 00:12:15.691 read: IOPS=22, BW=89.2KiB/s (91.4kB/s)(92.0KiB/1031msec) 00:12:15.691 slat (nsec): min=7614, max=33258, avg=23205.91, stdev=8988.22 00:12:15.691 clat (usec): min=40886, max=41116, avg=40973.85, stdev=50.43 00:12:15.691 lat (usec): min=40919, max=41123, 
avg=40997.05, stdev=45.26 00:12:15.691 clat percentiles (usec): 00:12:15.691 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:15.691 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:15.691 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:15.691 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:15.691 | 99.99th=[41157] 00:12:15.691 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:12:15.691 slat (nsec): min=7389, max=47960, avg=15110.85, stdev=5836.19 00:12:15.691 clat (usec): min=124, max=350, avg=152.12, stdev=18.33 00:12:15.691 lat (usec): min=132, max=385, avg=167.24, stdev=19.58 00:12:15.691 clat percentiles (usec): 00:12:15.691 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:12:15.691 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 153], 00:12:15.691 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:12:15.691 | 99.00th=[ 235], 99.50th=[ 281], 99.90th=[ 351], 99.95th=[ 351], 00:12:15.691 | 99.99th=[ 351] 00:12:15.691 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:15.691 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:15.691 lat (usec) : 250=95.14%, 500=0.56% 00:12:15.691 lat (msec) : 50=4.30% 00:12:15.691 cpu : usr=0.29%, sys=0.87%, ctx=535, majf=0, minf=1 00:12:15.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.691 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.691 00:12:15.691 Run status group 0 (all jobs): 00:12:15.691 READ: bw=89.2KiB/s (91.4kB/s), 89.2KiB/s-89.2KiB/s (91.4kB/s-91.4kB/s), io=92.0KiB (94.2kB), 
run=1031-1031msec 00:12:15.691 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:12:15.691 00:12:15.691 Disk stats (read/write): 00:12:15.691 nvme0n1: ios=69/512, merge=0/0, ticks=929/79, in_queue=1008, util=95.89% 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.691 rmmod nvme_tcp 00:12:15.691 rmmod nvme_fabrics 00:12:15.691 rmmod nvme_keyring 00:12:15.691 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 565044 ']' 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 565044 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 565044 ']' 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 565044 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 565044 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 565044' 00:12:15.949 killing process with pid 565044 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 565044 00:12:15.949 12:26:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 565044 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.209 12:26:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.115 00:12:18.115 real 0m10.210s 00:12:18.115 user 0m22.986s 00:12:18.115 sys 0m2.466s 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:18.115 ************************************ 00:12:18.115 END TEST nvmf_nmic 00:12:18.115 ************************************ 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:18.115 ************************************ 00:12:18.115 START TEST nvmf_fio_target 00:12:18.115 ************************************ 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:18.115 * Looking for test storage... 00:12:18.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:18.115 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:18.376 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.377 12:26:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.377 --rc genhtml_branch_coverage=1 00:12:18.377 --rc genhtml_function_coverage=1 00:12:18.377 --rc genhtml_legend=1 00:12:18.377 --rc geninfo_all_blocks=1 00:12:18.377 --rc geninfo_unexecuted_blocks=1 00:12:18.377 00:12:18.377 ' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.377 --rc genhtml_branch_coverage=1 00:12:18.377 --rc genhtml_function_coverage=1 00:12:18.377 --rc genhtml_legend=1 00:12:18.377 --rc geninfo_all_blocks=1 00:12:18.377 --rc geninfo_unexecuted_blocks=1 00:12:18.377 00:12:18.377 ' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.377 --rc genhtml_branch_coverage=1 00:12:18.377 --rc genhtml_function_coverage=1 00:12:18.377 --rc genhtml_legend=1 00:12:18.377 --rc geninfo_all_blocks=1 00:12:18.377 --rc geninfo_unexecuted_blocks=1 00:12:18.377 00:12:18.377 ' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:18.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.377 --rc 
genhtml_branch_coverage=1 00:12:18.377 --rc genhtml_function_coverage=1 00:12:18.377 --rc genhtml_legend=1 00:12:18.377 --rc geninfo_all_blocks=1 00:12:18.377 --rc geninfo_unexecuted_blocks=1 00:12:18.377 00:12:18.377 ' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:18.377 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.378 12:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.912 12:26:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:20.912 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:20.912 12:26:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:20.912 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.912 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:20.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:20.913 Found net devices under 0000:0a:00.1: cvl_0_1 
00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:20.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:20.913 00:12:20.913 --- 10.0.0.2 ping statistics --- 00:12:20.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.913 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:12:20.913 00:12:20.913 --- 10.0.0.1 ping statistics --- 00:12:20.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.913 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=567775 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 567775 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 567775 ']' 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:20.913 12:26:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.913 [2024-11-05 12:26:49.853281] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:12:20.913 [2024-11-05 12:26:49.853360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.913 [2024-11-05 12:26:49.923822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.913 [2024-11-05 12:26:49.968308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.913 [2024-11-05 12:26:49.968363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.913 [2024-11-05 12:26:49.968376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.913 [2024-11-05 12:26:49.968387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.913 [2024-11-05 12:26:49.968398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:20.913 [2024-11-05 12:26:49.969921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.913 [2024-11-05 12:26:49.970044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.913 [2024-11-05 12:26:49.970108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.913 [2024-11-05 12:26:49.970111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.913 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.913 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:12:20.913 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.913 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.913 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.913 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.914 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:21.171 [2024-11-05 12:26:50.355604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.171 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.737 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:21.737 12:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:21.995 12:26:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:21.995 12:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:22.253 12:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:22.253 12:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:22.511 12:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:22.511 12:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:22.769 12:26:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.027 12:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:23.027 12:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.285 12:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:23.285 12:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.543 12:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:23.543 12:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:23.801 12:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.058 12:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:24.058 12:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:24.316 12:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:24.316 12:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.574 12:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.832 [2024-11-05 12:26:54.069391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.090 12:26:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:25.347 12:26:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:25.611 12:26:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:26.282 12:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:26.282 12:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:12:26.282 12:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.282 12:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:12:26.282 12:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:12:26.282 12:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:12:28.205 12:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:28.205 12:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:28.205 12:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.205 12:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:12:28.205 12:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.205 12:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:12:28.205 12:26:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:28.205 [global] 00:12:28.205 thread=1 00:12:28.205 invalidate=1 00:12:28.205 rw=write 00:12:28.205 time_based=1 00:12:28.205 runtime=1 00:12:28.205 ioengine=libaio 00:12:28.205 direct=1 00:12:28.205 bs=4096 00:12:28.205 iodepth=1 00:12:28.205 norandommap=0 00:12:28.205 numjobs=1 00:12:28.205 00:12:28.205 
verify_dump=1 00:12:28.205 verify_backlog=512 00:12:28.205 verify_state_save=0 00:12:28.205 do_verify=1 00:12:28.205 verify=crc32c-intel 00:12:28.205 [job0] 00:12:28.205 filename=/dev/nvme0n1 00:12:28.205 [job1] 00:12:28.205 filename=/dev/nvme0n2 00:12:28.205 [job2] 00:12:28.205 filename=/dev/nvme0n3 00:12:28.205 [job3] 00:12:28.205 filename=/dev/nvme0n4 00:12:28.205 Could not set queue depth (nvme0n1) 00:12:28.205 Could not set queue depth (nvme0n2) 00:12:28.205 Could not set queue depth (nvme0n3) 00:12:28.205 Could not set queue depth (nvme0n4) 00:12:28.463 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.463 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.463 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.463 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:28.463 fio-3.35 00:12:28.463 Starting 4 threads 00:12:29.836 00:12:29.836 job0: (groupid=0, jobs=1): err= 0: pid=568853: Tue Nov 5 12:26:58 2024 00:12:29.836 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:29.836 slat (nsec): min=6088, max=49182, avg=11726.77, stdev=5633.78 00:12:29.836 clat (usec): min=184, max=3455, avg=266.24, stdev=100.62 00:12:29.836 lat (usec): min=191, max=3461, avg=277.96, stdev=102.05 00:12:29.836 clat percentiles (usec): 00:12:29.836 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:12:29.836 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 255], 00:12:29.836 | 70.00th=[ 273], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 363], 00:12:29.836 | 99.00th=[ 453], 99.50th=[ 494], 99.90th=[ 1205], 99.95th=[ 1926], 00:12:29.836 | 99.99th=[ 3458] 00:12:29.836 write: IOPS=2065, BW=8264KiB/s (8462kB/s)(8272KiB/1001msec); 0 zone resets 00:12:29.836 slat (nsec): min=7579, max=64080, avg=14414.58, 
stdev=6789.14 00:12:29.836 clat (usec): min=135, max=1265, avg=186.14, stdev=35.53 00:12:29.836 lat (usec): min=148, max=1291, avg=200.55, stdev=37.92 00:12:29.836 clat percentiles (usec): 00:12:29.836 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:12:29.836 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:12:29.836 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 223], 00:12:29.836 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 347], 99.95th=[ 791], 00:12:29.836 | 99.99th=[ 1270] 00:12:29.836 bw ( KiB/s): min= 8192, max= 8192, per=24.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:29.836 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:29.836 lat (usec) : 250=77.53%, 500=22.21%, 750=0.15%, 1000=0.02% 00:12:29.836 lat (msec) : 2=0.07%, 4=0.02% 00:12:29.836 cpu : usr=3.80%, sys=7.70%, ctx=4116, majf=0, minf=2 00:12:29.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.836 issued rwts: total=2048,2068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.836 job1: (groupid=0, jobs=1): err= 0: pid=568854: Tue Nov 5 12:26:58 2024 00:12:29.836 read: IOPS=1539, BW=6160KiB/s (6308kB/s)(6172KiB/1002msec) 00:12:29.836 slat (nsec): min=4340, max=45660, avg=9482.41, stdev=4904.10 00:12:29.836 clat (usec): min=183, max=41657, avg=384.34, stdev=2542.18 00:12:29.836 lat (usec): min=188, max=41664, avg=393.82, stdev=2542.50 00:12:29.836 clat percentiles (usec): 00:12:29.836 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:12:29.836 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:12:29.836 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 269], 00:12:29.836 | 99.00th=[ 359], 99.50th=[ 478], 
99.90th=[41157], 99.95th=[41681], 00:12:29.836 | 99.99th=[41681] 00:12:29.836 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:12:29.836 slat (nsec): min=5782, max=55923, avg=10588.34, stdev=5852.93 00:12:29.836 clat (usec): min=132, max=604, avg=176.68, stdev=37.55 00:12:29.836 lat (usec): min=138, max=614, avg=187.27, stdev=40.60 00:12:29.836 clat percentiles (usec): 00:12:29.836 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:12:29.836 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:12:29.836 | 70.00th=[ 176], 80.00th=[ 215], 90.00th=[ 239], 95.00th=[ 255], 00:12:29.836 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 351], 00:12:29.836 | 99.99th=[ 603] 00:12:29.836 bw ( KiB/s): min= 6360, max=10024, per=24.99%, avg=8192.00, stdev=2590.84, samples=2 00:12:29.836 iops : min= 1590, max= 2506, avg=2048.00, stdev=647.71, samples=2 00:12:29.836 lat (usec) : 250=91.26%, 500=8.55%, 750=0.03% 00:12:29.836 lat (msec) : 50=0.17% 00:12:29.836 cpu : usr=2.30%, sys=3.50%, ctx=3591, majf=0, minf=2 00:12:29.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.836 issued rwts: total=1543,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.836 job2: (groupid=0, jobs=1): err= 0: pid=568857: Tue Nov 5 12:26:58 2024 00:12:29.836 read: IOPS=1598, BW=6394KiB/s (6547kB/s)(6400KiB/1001msec) 00:12:29.836 slat (nsec): min=6036, max=50620, avg=11808.92, stdev=5926.36 00:12:29.836 clat (usec): min=217, max=576, avg=307.11, stdev=65.41 00:12:29.836 lat (usec): min=223, max=596, avg=318.92, stdev=67.96 00:12:29.836 clat percentiles (usec): 00:12:29.836 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 253], 00:12:29.836 | 30.00th=[ 262], 
40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 322], 00:12:29.836 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 379], 95.00th=[ 453], 00:12:29.836 | 99.00th=[ 537], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 578], 00:12:29.836 | 99.99th=[ 578] 00:12:29.836 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:29.837 slat (nsec): min=7560, max=57339, avg=15607.19, stdev=7236.44 00:12:29.837 clat (usec): min=159, max=429, avg=216.41, stdev=20.90 00:12:29.837 lat (usec): min=168, max=442, avg=232.02, stdev=24.38 00:12:29.837 clat percentiles (usec): 00:12:29.837 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 200], 00:12:29.837 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:12:29.837 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 249], 00:12:29.837 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 302], 00:12:29.837 | 99.99th=[ 429] 00:12:29.837 bw ( KiB/s): min= 8192, max= 8192, per=24.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:29.837 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:29.837 lat (usec) : 250=61.40%, 500=37.61%, 750=0.99% 00:12:29.837 cpu : usr=4.60%, sys=6.10%, ctx=3649, majf=0, minf=1 00:12:29.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.837 issued rwts: total=1600,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.837 job3: (groupid=0, jobs=1): err= 0: pid=568858: Tue Nov 5 12:26:58 2024 00:12:29.837 read: IOPS=1632, BW=6529KiB/s (6686kB/s)(6536KiB/1001msec) 00:12:29.837 slat (nsec): min=6194, max=60758, avg=11230.86, stdev=5786.04 00:12:29.837 clat (usec): min=201, max=594, avg=289.10, stdev=55.96 00:12:29.837 lat (usec): min=208, max=603, avg=300.33, stdev=57.87 
00:12:29.837 clat percentiles (usec): 00:12:29.837 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:12:29.837 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:12:29.837 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 379], 00:12:29.837 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 586], 99.95th=[ 594], 00:12:29.837 | 99.99th=[ 594] 00:12:29.837 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:29.837 slat (nsec): min=8056, max=54457, avg=16795.45, stdev=7493.99 00:12:29.837 clat (usec): min=151, max=408, avg=224.97, stdev=26.70 00:12:29.837 lat (usec): min=160, max=429, avg=241.77, stdev=29.71 00:12:29.837 clat percentiles (usec): 00:12:29.837 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 204], 00:12:29.837 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 229], 00:12:29.837 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 273], 00:12:29.837 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 367], 99.95th=[ 367], 00:12:29.837 | 99.99th=[ 408] 00:12:29.837 bw ( KiB/s): min= 8192, max= 8192, per=24.99%, avg=8192.00, stdev= 0.00, samples=1 00:12:29.837 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:29.837 lat (usec) : 250=56.90%, 500=42.56%, 750=0.54% 00:12:29.837 cpu : usr=4.80%, sys=6.10%, ctx=3683, majf=0, minf=1 00:12:29.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.837 issued rwts: total=1634,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:29.837 00:12:29.837 Run status group 0 (all jobs): 00:12:29.837 READ: bw=26.6MiB/s (27.9MB/s), 6160KiB/s-8184KiB/s (6308kB/s-8380kB/s), io=26.7MiB (28.0MB), run=1001-1002msec 00:12:29.837 WRITE: bw=32.0MiB/s (33.6MB/s), 
8176KiB/s-8264KiB/s (8372kB/s-8462kB/s), io=32.1MiB (33.6MB), run=1001-1002msec 00:12:29.837 00:12:29.837 Disk stats (read/write): 00:12:29.837 nvme0n1: ios=1586/1983, merge=0/0, ticks=429/351, in_queue=780, util=86.77% 00:12:29.837 nvme0n2: ios=1588/2048, merge=0/0, ticks=488/354, in_queue=842, util=90.84% 00:12:29.837 nvme0n3: ios=1532/1536, merge=0/0, ticks=526/308, in_queue=834, util=95.09% 00:12:29.837 nvme0n4: ios=1559/1569, merge=0/0, ticks=1333/338, in_queue=1671, util=94.32% 00:12:29.837 12:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:29.837 [global] 00:12:29.837 thread=1 00:12:29.837 invalidate=1 00:12:29.837 rw=randwrite 00:12:29.837 time_based=1 00:12:29.837 runtime=1 00:12:29.837 ioengine=libaio 00:12:29.837 direct=1 00:12:29.837 bs=4096 00:12:29.837 iodepth=1 00:12:29.837 norandommap=0 00:12:29.837 numjobs=1 00:12:29.837 00:12:29.837 verify_dump=1 00:12:29.837 verify_backlog=512 00:12:29.837 verify_state_save=0 00:12:29.837 do_verify=1 00:12:29.837 verify=crc32c-intel 00:12:29.837 [job0] 00:12:29.837 filename=/dev/nvme0n1 00:12:29.837 [job1] 00:12:29.837 filename=/dev/nvme0n2 00:12:29.837 [job2] 00:12:29.837 filename=/dev/nvme0n3 00:12:29.837 [job3] 00:12:29.837 filename=/dev/nvme0n4 00:12:29.837 Could not set queue depth (nvme0n1) 00:12:29.837 Could not set queue depth (nvme0n2) 00:12:29.837 Could not set queue depth (nvme0n3) 00:12:29.837 Could not set queue depth (nvme0n4) 00:12:29.837 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.837 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.837 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.837 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:29.837 fio-3.35 00:12:29.837 Starting 4 threads 00:12:31.210 00:12:31.210 job0: (groupid=0, jobs=1): err= 0: pid=569090: Tue Nov 5 12:27:00 2024 00:12:31.210 read: IOPS=1546, BW=6185KiB/s (6334kB/s)(6272KiB/1014msec) 00:12:31.210 slat (nsec): min=5633, max=57080, avg=10537.48, stdev=5551.27 00:12:31.210 clat (usec): min=166, max=42063, avg=370.82, stdev=2075.10 00:12:31.210 lat (usec): min=172, max=42079, avg=381.36, stdev=2075.48 00:12:31.210 clat percentiles (usec): 00:12:31.210 | 1.00th=[ 186], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:12:31.210 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:12:31.211 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 433], 95.00th=[ 453], 00:12:31.211 | 99.00th=[ 478], 99.50th=[ 611], 99.90th=[41157], 99.95th=[42206], 00:12:31.211 | 99.99th=[42206] 00:12:31.211 write: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec); 0 zone resets 00:12:31.211 slat (nsec): min=7659, max=54624, avg=13693.09, stdev=6412.61 00:12:31.211 clat (usec): min=127, max=892, avg=182.25, stdev=32.82 00:12:31.211 lat (usec): min=135, max=903, avg=195.94, stdev=35.83 00:12:31.211 clat percentiles (usec): 00:12:31.211 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:12:31.211 | 30.00th=[ 165], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 184], 00:12:31.211 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 221], 95.00th=[ 229], 00:12:31.211 | 99.00th=[ 253], 99.50th=[ 277], 99.90th=[ 578], 99.95th=[ 676], 00:12:31.211 | 99.99th=[ 898] 00:12:31.211 bw ( KiB/s): min= 6824, max= 9560, per=34.61%, avg=8192.00, stdev=1934.64, samples=2 00:12:31.211 iops : min= 1706, max= 2390, avg=2048.00, stdev=483.66, samples=2 00:12:31.211 lat (usec) : 250=83.46%, 500=16.12%, 750=0.22%, 1000=0.08% 00:12:31.211 lat (msec) : 50=0.11% 00:12:31.211 cpu : usr=3.95%, sys=5.13%, ctx=3618, majf=0, minf=1 00:12:31.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.211 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 issued rwts: total=1568,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.211 job1: (groupid=0, jobs=1): err= 0: pid=569091: Tue Nov 5 12:27:00 2024 00:12:31.211 read: IOPS=1147, BW=4591KiB/s (4701kB/s)(4628KiB/1008msec) 00:12:31.211 slat (nsec): min=5391, max=50425, avg=9679.86, stdev=5149.31 00:12:31.211 clat (usec): min=170, max=42105, avg=559.29, stdev=3614.90 00:12:31.211 lat (usec): min=176, max=42123, avg=568.97, stdev=3615.78 00:12:31.211 clat percentiles (usec): 00:12:31.211 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 210], 20.00th=[ 221], 00:12:31.211 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:12:31.211 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:12:31.211 | 99.00th=[ 469], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:12:31.211 | 99.99th=[42206] 00:12:31.211 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:12:31.211 slat (nsec): min=7549, max=67903, avg=15997.35, stdev=8586.04 00:12:31.211 clat (usec): min=133, max=457, avg=204.63, stdev=44.09 00:12:31.211 lat (usec): min=145, max=486, avg=220.63, stdev=48.28 00:12:31.211 clat percentiles (usec): 00:12:31.211 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 167], 00:12:31.211 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 196], 60.00th=[ 208], 00:12:31.211 | 70.00th=[ 221], 80.00th=[ 233], 90.00th=[ 251], 95.00th=[ 297], 00:12:31.211 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 424], 99.95th=[ 457], 00:12:31.211 | 99.99th=[ 457] 00:12:31.211 bw ( KiB/s): min= 4096, max= 8192, per=25.95%, avg=6144.00, stdev=2896.31, samples=2 00:12:31.211 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:12:31.211 lat (usec) : 250=83.33%, 500=16.26%, 750=0.04% 00:12:31.211 lat 
(msec) : 4=0.04%, 50=0.33% 00:12:31.211 cpu : usr=3.57%, sys=3.57%, ctx=2695, majf=0, minf=1 00:12:31.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 issued rwts: total=1157,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.211 job2: (groupid=0, jobs=1): err= 0: pid=569092: Tue Nov 5 12:27:00 2024 00:12:31.211 read: IOPS=21, BW=84.7KiB/s (86.7kB/s)(88.0KiB/1039msec) 00:12:31.211 slat (nsec): min=6855, max=37285, avg=17828.95, stdev=7129.36 00:12:31.211 clat (usec): min=40838, max=42075, avg=41353.90, stdev=504.89 00:12:31.211 lat (usec): min=40870, max=42092, avg=41371.73, stdev=504.95 00:12:31.211 clat percentiles (usec): 00:12:31.211 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:31.211 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:31.211 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:31.211 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:31.211 | 99.99th=[42206] 00:12:31.211 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:12:31.211 slat (nsec): min=6345, max=63019, avg=15399.63, stdev=8011.64 00:12:31.211 clat (usec): min=149, max=417, avg=229.02, stdev=34.43 00:12:31.211 lat (usec): min=156, max=425, avg=244.42, stdev=31.38 00:12:31.211 clat percentiles (usec): 00:12:31.211 | 1.00th=[ 163], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 204], 00:12:31.211 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:12:31.211 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 277], 95.00th=[ 289], 00:12:31.211 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 416], 99.95th=[ 416], 00:12:31.211 | 99.99th=[ 416] 00:12:31.211 bw ( KiB/s): min= 4096, max= 
4096, per=17.30%, avg=4096.00, stdev= 0.00, samples=1 00:12:31.211 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:31.211 lat (usec) : 250=76.97%, 500=18.91% 00:12:31.211 lat (msec) : 50=4.12% 00:12:31.211 cpu : usr=0.48%, sys=0.58%, ctx=536, majf=0, minf=1 00:12:31.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.211 job3: (groupid=0, jobs=1): err= 0: pid=569093: Tue Nov 5 12:27:00 2024 00:12:31.211 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:31.211 slat (nsec): min=5924, max=36092, avg=10658.61, stdev=4744.52 00:12:31.211 clat (usec): min=202, max=594, avg=253.74, stdev=31.54 00:12:31.211 lat (usec): min=209, max=603, avg=264.40, stdev=32.96 00:12:31.211 clat percentiles (usec): 00:12:31.211 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:12:31.211 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:12:31.211 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:12:31.211 | 99.00th=[ 379], 99.50th=[ 490], 99.90th=[ 594], 99.95th=[ 594], 00:12:31.211 | 99.99th=[ 594] 00:12:31.211 write: IOPS=2050, BW=8204KiB/s (8401kB/s)(8212KiB/1001msec); 0 zone resets 00:12:31.211 slat (nsec): min=7949, max=72131, avg=15064.92, stdev=7110.07 00:12:31.211 clat (usec): min=150, max=2883, avg=200.10, stdev=63.33 00:12:31.211 lat (usec): min=163, max=2899, avg=215.17, stdev=64.79 00:12:31.211 clat percentiles (usec): 00:12:31.211 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 178], 00:12:31.211 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:12:31.211 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 
237], 00:12:31.211 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 318], 99.95th=[ 330], 00:12:31.211 | 99.99th=[ 2868] 00:12:31.211 bw ( KiB/s): min= 8192, max= 8192, per=34.61%, avg=8192.00, stdev= 0.00, samples=1 00:12:31.211 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:31.211 lat (usec) : 250=73.81%, 500=25.99%, 750=0.17% 00:12:31.211 lat (msec) : 4=0.02% 00:12:31.211 cpu : usr=4.10%, sys=6.90%, ctx=4102, majf=0, minf=1 00:12:31.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.211 issued rwts: total=2048,2053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:31.211 00:12:31.211 Run status group 0 (all jobs): 00:12:31.211 READ: bw=18.0MiB/s (18.9MB/s), 84.7KiB/s-8184KiB/s (86.7kB/s-8380kB/s), io=18.7MiB (19.6MB), run=1001-1039msec 00:12:31.211 WRITE: bw=23.1MiB/s (24.2MB/s), 1971KiB/s-8204KiB/s (2018kB/s-8401kB/s), io=24.0MiB (25.2MB), run=1001-1039msec 00:12:31.211 00:12:31.211 Disk stats (read/write): 00:12:31.211 nvme0n1: ios=1584/2048, merge=0/0, ticks=1023/359, in_queue=1382, util=86.07% 00:12:31.211 nvme0n2: ios=1067/1536, merge=0/0, ticks=1353/293, in_queue=1646, util=90.15% 00:12:31.211 nvme0n3: ios=74/512, merge=0/0, ticks=777/112, in_queue=889, util=94.80% 00:12:31.211 nvme0n4: ios=1558/2005, merge=0/0, ticks=1278/376, in_queue=1654, util=94.24% 00:12:31.211 12:27:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:31.211 [global] 00:12:31.211 thread=1 00:12:31.211 invalidate=1 00:12:31.211 rw=write 00:12:31.211 time_based=1 00:12:31.211 runtime=1 00:12:31.211 ioengine=libaio 00:12:31.211 direct=1 00:12:31.211 bs=4096 00:12:31.211 
iodepth=128 00:12:31.211 norandommap=0 00:12:31.211 numjobs=1 00:12:31.211 00:12:31.211 verify_dump=1 00:12:31.211 verify_backlog=512 00:12:31.211 verify_state_save=0 00:12:31.211 do_verify=1 00:12:31.211 verify=crc32c-intel 00:12:31.211 [job0] 00:12:31.211 filename=/dev/nvme0n1 00:12:31.211 [job1] 00:12:31.211 filename=/dev/nvme0n2 00:12:31.211 [job2] 00:12:31.211 filename=/dev/nvme0n3 00:12:31.211 [job3] 00:12:31.211 filename=/dev/nvme0n4 00:12:31.211 Could not set queue depth (nvme0n1) 00:12:31.211 Could not set queue depth (nvme0n2) 00:12:31.211 Could not set queue depth (nvme0n3) 00:12:31.211 Could not set queue depth (nvme0n4) 00:12:31.469 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.469 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.469 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.469 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:31.469 fio-3.35 00:12:31.469 Starting 4 threads 00:12:32.843 00:12:32.843 job0: (groupid=0, jobs=1): err= 0: pid=569361: Tue Nov 5 12:27:01 2024 00:12:32.843 read: IOPS=2074, BW=8298KiB/s (8497kB/s)(8364KiB/1008msec) 00:12:32.843 slat (usec): min=3, max=26640, avg=272.04, stdev=1489.22 00:12:32.843 clat (msec): min=4, max=122, avg=34.82, stdev=24.74 00:12:32.843 lat (msec): min=10, max=122, avg=35.09, stdev=24.86 00:12:32.843 clat percentiles (msec): 00:12:32.843 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:12:32.843 | 30.00th=[ 20], 40.00th=[ 23], 50.00th=[ 29], 60.00th=[ 35], 00:12:32.843 | 70.00th=[ 44], 80.00th=[ 49], 90.00th=[ 55], 95.00th=[ 92], 00:12:32.843 | 99.00th=[ 123], 99.50th=[ 123], 99.90th=[ 123], 99.95th=[ 123], 00:12:32.843 | 99.99th=[ 123] 00:12:32.843 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 
00:12:32.843 slat (usec): min=4, max=24962, avg=163.08, stdev=1030.62 00:12:32.843 clat (usec): min=9910, max=57417, avg=21304.01, stdev=9532.65 00:12:32.843 lat (usec): min=10611, max=57423, avg=21467.08, stdev=9549.42 00:12:32.843 clat percentiles (usec): 00:12:32.843 | 1.00th=[11338], 5.00th=[12780], 10.00th=[12911], 20.00th=[14484], 00:12:32.843 | 30.00th=[16450], 40.00th=[17171], 50.00th=[17433], 60.00th=[17957], 00:12:32.843 | 70.00th=[22938], 80.00th=[27132], 90.00th=[33817], 95.00th=[36439], 00:12:32.843 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:12:32.843 | 99.99th=[57410] 00:12:32.843 bw ( KiB/s): min= 7512, max=12288, per=16.26%, avg=9900.00, stdev=3377.14, samples=2 00:12:32.843 iops : min= 1878, max= 3072, avg=2475.00, stdev=844.29, samples=2 00:12:32.843 lat (msec) : 10=0.28%, 20=50.44%, 50=39.32%, 100=7.93%, 250=2.02% 00:12:32.843 cpu : usr=1.39%, sys=3.97%, ctx=185, majf=0, minf=1 00:12:32.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:32.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.843 issued rwts: total=2091,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.843 job1: (groupid=0, jobs=1): err= 0: pid=569362: Tue Nov 5 12:27:01 2024 00:12:32.843 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:12:32.843 slat (usec): min=2, max=15188, avg=123.65, stdev=782.47 00:12:32.843 clat (usec): min=1823, max=39858, avg=15728.37, stdev=4124.14 00:12:32.843 lat (usec): min=1828, max=39873, avg=15852.02, stdev=4184.63 00:12:32.843 clat percentiles (usec): 00:12:32.843 | 1.00th=[ 7242], 5.00th=[11469], 10.00th=[11863], 20.00th=[12911], 00:12:32.843 | 30.00th=[13960], 40.00th=[14484], 50.00th=[15008], 60.00th=[16057], 00:12:32.843 | 70.00th=[16319], 80.00th=[17171], 90.00th=[19792], 
95.00th=[23987], 00:12:32.843 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:12:32.843 | 99.99th=[40109] 00:12:32.843 write: IOPS=3729, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1010msec); 0 zone resets 00:12:32.843 slat (usec): min=3, max=26302, avg=135.91, stdev=923.70 00:12:32.843 clat (usec): min=846, max=66485, avg=19049.74, stdev=10577.66 00:12:32.843 lat (usec): min=853, max=66501, avg=19185.65, stdev=10667.03 00:12:32.843 clat percentiles (usec): 00:12:32.843 | 1.00th=[ 2573], 5.00th=[ 6521], 10.00th=[ 9241], 20.00th=[11076], 00:12:32.843 | 30.00th=[11994], 40.00th=[13304], 50.00th=[14353], 60.00th=[17433], 00:12:32.843 | 70.00th=[22938], 80.00th=[30540], 90.00th=[34341], 95.00th=[39060], 00:12:32.843 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[55313], 00:12:32.843 | 99.99th=[66323] 00:12:32.843 bw ( KiB/s): min=12472, max=16640, per=23.91%, avg=14556.00, stdev=2947.22, samples=2 00:12:32.843 iops : min= 3118, max= 4160, avg=3639.00, stdev=736.81, samples=2 00:12:32.843 lat (usec) : 1000=0.04% 00:12:32.843 lat (msec) : 2=0.20%, 4=0.95%, 10=6.00%, 20=69.46%, 50=23.30% 00:12:32.843 lat (msec) : 100=0.04% 00:12:32.843 cpu : usr=2.38%, sys=3.57%, ctx=326, majf=0, minf=1 00:12:32.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:32.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.843 issued rwts: total=3584,3767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.843 job2: (groupid=0, jobs=1): err= 0: pid=569368: Tue Nov 5 12:27:01 2024 00:12:32.843 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:12:32.843 slat (usec): min=2, max=11678, avg=99.93, stdev=701.68 00:12:32.843 clat (usec): min=4507, max=24159, avg=12705.71, stdev=3273.76 00:12:32.843 lat (usec): min=4526, max=24176, avg=12805.64, 
stdev=3318.73 00:12:32.843 clat percentiles (usec): 00:12:32.843 | 1.00th=[ 6194], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[10290], 00:12:32.843 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:12:32.843 | 70.00th=[13435], 80.00th=[14877], 90.00th=[17433], 95.00th=[19792], 00:12:32.843 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23725], 99.95th=[24249], 00:12:32.843 | 99.99th=[24249] 00:12:32.843 write: IOPS=5501, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1005msec); 0 zone resets 00:12:32.843 slat (usec): min=3, max=11242, avg=79.85, stdev=536.43 00:12:32.843 clat (usec): min=1102, max=25022, avg=11301.54, stdev=2471.15 00:12:32.843 lat (usec): min=1135, max=25046, avg=11381.40, stdev=2524.08 00:12:32.843 clat percentiles (usec): 00:12:32.843 | 1.00th=[ 3523], 5.00th=[ 6521], 10.00th=[ 8029], 20.00th=[ 9896], 00:12:32.843 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11600], 60.00th=[12256], 00:12:32.843 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[14091], 00:12:32.843 | 99.00th=[16450], 99.50th=[18220], 99.90th=[23725], 99.95th=[23987], 00:12:32.843 | 99.99th=[25035] 00:12:32.843 bw ( KiB/s): min=20480, max=22728, per=35.48%, avg=21604.00, stdev=1589.58, samples=2 00:12:32.843 iops : min= 5120, max= 5682, avg=5401.00, stdev=397.39, samples=2 00:12:32.843 lat (msec) : 2=0.01%, 4=0.85%, 10=15.63%, 20=81.14%, 50=2.37% 00:12:32.843 cpu : usr=4.38%, sys=8.96%, ctx=510, majf=0, minf=2 00:12:32.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:32.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.843 issued rwts: total=5120,5529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.843 job3: (groupid=0, jobs=1): err= 0: pid=569371: Tue Nov 5 12:27:01 2024 00:12:32.843 read: IOPS=3540, BW=13.8MiB/s (14.5MB/s)(14.5MiB/1048msec) 
00:12:32.843 slat (usec): min=2, max=12126, avg=120.16, stdev=725.78 00:12:32.843 clat (usec): min=6836, max=89934, avg=16142.45, stdev=10755.40 00:12:32.843 lat (usec): min=6840, max=89957, avg=16262.61, stdev=10807.29 00:12:32.843 clat percentiles (usec): 00:12:32.843 | 1.00th=[ 7504], 5.00th=[10421], 10.00th=[11338], 20.00th=[12780], 00:12:32.843 | 30.00th=[13042], 40.00th=[13566], 50.00th=[14484], 60.00th=[14746], 00:12:32.843 | 70.00th=[15270], 80.00th=[16057], 90.00th=[17695], 95.00th=[19268], 00:12:32.843 | 99.00th=[78119], 99.50th=[84411], 99.90th=[89654], 99.95th=[89654], 00:12:32.843 | 99.99th=[89654] 00:12:32.843 write: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1048msec); 0 zone resets 00:12:32.843 slat (usec): min=3, max=12711, avg=129.16, stdev=685.51 00:12:32.843 clat (usec): min=1936, max=89961, avg=17718.13, stdev=12822.86 00:12:32.843 lat (usec): min=1944, max=89970, avg=17847.29, stdev=12914.13 00:12:32.843 clat percentiles (usec): 00:12:32.843 | 1.00th=[ 6718], 5.00th=[10028], 10.00th=[11338], 20.00th=[12649], 00:12:32.843 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:12:32.843 | 70.00th=[13960], 80.00th=[15401], 90.00th=[37487], 95.00th=[53740], 00:12:32.843 | 99.00th=[64750], 99.50th=[65274], 99.90th=[66323], 99.95th=[66323], 00:12:32.843 | 99.99th=[89654] 00:12:32.843 bw ( KiB/s): min=12792, max=19960, per=26.90%, avg=16376.00, stdev=5068.54, samples=2 00:12:32.843 iops : min= 3198, max= 4990, avg=4094.00, stdev=1267.14, samples=2 00:12:32.843 lat (msec) : 2=0.19%, 10=4.38%, 20=86.23%, 50=4.18%, 100=5.02% 00:12:32.843 cpu : usr=3.06%, sys=6.40%, ctx=412, majf=0, minf=1 00:12:32.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:32.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:32.843 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.843 
latency : target=0, window=0, percentile=100.00%, depth=128 00:12:32.843 00:12:32.843 Run status group 0 (all jobs): 00:12:32.843 READ: bw=54.1MiB/s (56.7MB/s), 8298KiB/s-19.9MiB/s (8497kB/s-20.9MB/s), io=56.7MiB (59.4MB), run=1005-1048msec 00:12:32.843 WRITE: bw=59.5MiB/s (62.3MB/s), 9.92MiB/s-21.5MiB/s (10.4MB/s-22.5MB/s), io=62.3MiB (65.3MB), run=1005-1048msec 00:12:32.843 00:12:32.843 Disk stats (read/write): 00:12:32.843 nvme0n1: ios=2034/2048, merge=0/0, ticks=17107/9549, in_queue=26656, util=86.77% 00:12:32.843 nvme0n2: ios=3122/3189, merge=0/0, ticks=28475/34371, in_queue=62846, util=92.08% 00:12:32.843 nvme0n3: ios=4241/4608, merge=0/0, ticks=53024/51561, in_queue=104585, util=91.15% 00:12:32.843 nvme0n4: ios=3641/3759, merge=0/0, ticks=28587/30017, in_queue=58604, util=95.69% 00:12:32.843 12:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:32.843 [global] 00:12:32.843 thread=1 00:12:32.843 invalidate=1 00:12:32.843 rw=randwrite 00:12:32.843 time_based=1 00:12:32.843 runtime=1 00:12:32.843 ioengine=libaio 00:12:32.843 direct=1 00:12:32.843 bs=4096 00:12:32.843 iodepth=128 00:12:32.843 norandommap=0 00:12:32.843 numjobs=1 00:12:32.843 00:12:32.843 verify_dump=1 00:12:32.843 verify_backlog=512 00:12:32.843 verify_state_save=0 00:12:32.843 do_verify=1 00:12:32.843 verify=crc32c-intel 00:12:32.843 [job0] 00:12:32.843 filename=/dev/nvme0n1 00:12:32.843 [job1] 00:12:32.844 filename=/dev/nvme0n2 00:12:32.844 [job2] 00:12:32.844 filename=/dev/nvme0n3 00:12:32.844 [job3] 00:12:32.844 filename=/dev/nvme0n4 00:12:32.844 Could not set queue depth (nvme0n1) 00:12:32.844 Could not set queue depth (nvme0n2) 00:12:32.844 Could not set queue depth (nvme0n3) 00:12:32.844 Could not set queue depth (nvme0n4) 00:12:32.844 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:12:32.844 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:32.844 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:32.844 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:32.844 fio-3.35 00:12:32.844 Starting 4 threads 00:12:34.216 00:12:34.216 job0: (groupid=0, jobs=1): err= 0: pid=569788: Tue Nov 5 12:27:03 2024 00:12:34.216 read: IOPS=4886, BW=19.1MiB/s (20.0MB/s)(19.9MiB/1044msec) 00:12:34.216 slat (usec): min=3, max=5780, avg=96.09, stdev=520.15 00:12:34.216 clat (usec): min=8013, max=53732, avg=13517.86, stdev=5872.57 00:12:34.216 lat (usec): min=8116, max=56241, avg=13613.95, stdev=5887.64 00:12:34.216 clat percentiles (usec): 00:12:34.216 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10552], 20.00th=[11731], 00:12:34.216 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:12:34.216 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14746], 95.00th=[16319], 00:12:34.216 | 99.00th=[50070], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:12:34.216 | 99.99th=[53740] 00:12:34.216 write: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1044msec); 0 zone resets 00:12:34.216 slat (usec): min=3, max=5683, avg=87.79, stdev=433.23 00:12:34.216 clat (usec): min=5854, max=18094, avg=12323.96, stdev=1552.10 00:12:34.216 lat (usec): min=5862, max=18864, avg=12411.75, stdev=1580.63 00:12:34.216 clat percentiles (usec): 00:12:34.216 | 1.00th=[ 7898], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:12:34.216 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:12:34.216 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[14746], 00:12:34.216 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:12:34.216 | 99.99th=[18220] 00:12:34.216 bw ( KiB/s): min=20360, max=20600, per=28.10%, avg=20480.00, stdev=169.71, 
samples=2 00:12:34.216 iops : min= 5090, max= 5150, avg=5120.00, stdev=42.43, samples=2 00:12:34.216 lat (msec) : 10=5.45%, 20=93.31%, 50=0.63%, 100=0.62% 00:12:34.216 cpu : usr=6.90%, sys=11.98%, ctx=504, majf=0, minf=1 00:12:34.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:34.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.216 issued rwts: total=5101,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.216 job1: (groupid=0, jobs=1): err= 0: pid=569789: Tue Nov 5 12:27:03 2024 00:12:34.216 read: IOPS=4472, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1003msec) 00:12:34.216 slat (usec): min=3, max=22136, avg=105.45, stdev=598.21 00:12:34.216 clat (usec): min=968, max=43536, avg=13541.87, stdev=5114.02 00:12:34.216 lat (usec): min=3144, max=43555, avg=13647.32, stdev=5124.53 00:12:34.216 clat percentiles (usec): 00:12:34.216 | 1.00th=[ 6783], 5.00th=[10552], 10.00th=[11076], 20.00th=[11994], 00:12:34.216 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:12:34.216 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14615], 00:12:34.216 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:12:34.216 | 99.99th=[43779] 00:12:34.216 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:12:34.216 slat (usec): min=4, max=14132, avg=101.40, stdev=585.24 00:12:34.216 clat (usec): min=9460, max=44531, avg=14267.51, stdev=7219.65 00:12:34.216 lat (usec): min=9495, max=44538, avg=14368.91, stdev=7254.33 00:12:34.216 clat percentiles (usec): 00:12:34.216 | 1.00th=[ 9896], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:12:34.216 | 30.00th=[11076], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:12:34.216 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13960], 95.00th=[35914], 
00:12:34.216 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:12:34.216 | 99.99th=[44303] 00:12:34.216 bw ( KiB/s): min=16848, max=20016, per=25.29%, avg=18432.00, stdev=2240.11, samples=2 00:12:34.216 iops : min= 4212, max= 5004, avg=4608.00, stdev=560.03, samples=2 00:12:34.216 lat (usec) : 1000=0.01% 00:12:34.216 lat (msec) : 4=0.33%, 10=1.89%, 20=91.49%, 50=6.28% 00:12:34.216 cpu : usr=6.79%, sys=12.38%, ctx=416, majf=0, minf=1 00:12:34.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:34.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.216 issued rwts: total=4486,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.216 job2: (groupid=0, jobs=1): err= 0: pid=569790: Tue Nov 5 12:27:03 2024 00:12:34.216 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:34.216 slat (usec): min=3, max=4415, avg=103.31, stdev=543.18 00:12:34.216 clat (usec): min=9549, max=18539, avg=13926.51, stdev=1240.87 00:12:34.216 lat (usec): min=9568, max=18606, avg=14029.81, stdev=1255.03 00:12:34.216 clat percentiles (usec): 00:12:34.216 | 1.00th=[10290], 5.00th=[11207], 10.00th=[12387], 20.00th=[13173], 00:12:34.216 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14353], 00:12:34.216 | 70.00th=[14484], 80.00th=[14615], 90.00th=[15139], 95.00th=[15401], 00:12:34.216 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:12:34.216 | 99.99th=[18482] 00:12:34.216 write: IOPS=4681, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1001msec); 0 zone resets 00:12:34.216 slat (usec): min=4, max=4216, avg=99.04, stdev=484.93 00:12:34.216 clat (usec): min=625, max=17776, avg=13300.75, stdev=1371.91 00:12:34.216 lat (usec): min=660, max=17799, avg=13399.79, stdev=1414.27 00:12:34.216 clat percentiles (usec): 00:12:34.216 | 
1.00th=[ 8848], 5.00th=[10552], 10.00th=[12518], 20.00th=[13042], 00:12:34.216 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13435], 60.00th=[13566], 00:12:34.216 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14353], 00:12:34.216 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:12:34.216 | 99.99th=[17695] 00:12:34.216 bw ( KiB/s): min=16400, max=20464, per=25.29%, avg=18432.00, stdev=2873.68, samples=2 00:12:34.216 iops : min= 4100, max= 5116, avg=4608.00, stdev=718.42, samples=2 00:12:34.216 lat (usec) : 750=0.02% 00:12:34.216 lat (msec) : 10=1.46%, 20=98.52% 00:12:34.216 cpu : usr=6.40%, sys=13.00%, ctx=396, majf=0, minf=2 00:12:34.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:34.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.216 issued rwts: total=4608,4686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.216 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.216 job3: (groupid=0, jobs=1): err= 0: pid=569791: Tue Nov 5 12:27:03 2024 00:12:34.216 read: IOPS=4323, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1006msec) 00:12:34.216 slat (usec): min=2, max=13985, avg=121.90, stdev=892.63 00:12:34.216 clat (usec): min=1860, max=28619, avg=15339.85, stdev=3797.55 00:12:34.216 lat (usec): min=4787, max=29365, avg=15461.75, stdev=3847.90 00:12:34.216 clat percentiles (usec): 00:12:34.216 | 1.00th=[ 5276], 5.00th=[ 9241], 10.00th=[13042], 20.00th=[13698], 00:12:34.216 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14091], 60.00th=[14615], 00:12:34.216 | 70.00th=[15795], 80.00th=[17695], 90.00th=[21103], 95.00th=[23462], 00:12:34.217 | 99.00th=[26346], 99.50th=[27132], 99.90th=[28181], 99.95th=[28181], 00:12:34.217 | 99.99th=[28705] 00:12:34.217 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:12:34.217 slat (usec): min=3, max=11324, avg=87.28, 
stdev=497.84 00:12:34.217 clat (usec): min=941, max=28120, avg=13096.87, stdev=3330.84 00:12:34.217 lat (usec): min=1470, max=28129, avg=13184.15, stdev=3385.79 00:12:34.217 clat percentiles (usec): 00:12:34.217 | 1.00th=[ 3621], 5.00th=[ 5800], 10.00th=[ 7439], 20.00th=[10683], 00:12:34.217 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14222], 60.00th=[14353], 00:12:34.217 | 70.00th=[14746], 80.00th=[15008], 90.00th=[16057], 95.00th=[16909], 00:12:34.217 | 99.00th=[17171], 99.50th=[21365], 99.90th=[27132], 99.95th=[27395], 00:12:34.217 | 99.99th=[28181] 00:12:34.217 bw ( KiB/s): min=18096, max=18768, per=25.29%, avg=18432.00, stdev=475.18, samples=2 00:12:34.217 iops : min= 4524, max= 4692, avg=4608.00, stdev=118.79, samples=2 00:12:34.217 lat (usec) : 1000=0.01% 00:12:34.217 lat (msec) : 2=0.08%, 4=0.92%, 10=10.67%, 20=82.28%, 50=6.04% 00:12:34.217 cpu : usr=3.58%, sys=6.47%, ctx=493, majf=0, minf=1 00:12:34.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:34.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:34.217 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:34.217 00:12:34.217 Run status group 0 (all jobs): 00:12:34.217 READ: bw=69.4MiB/s (72.8MB/s), 16.9MiB/s-19.1MiB/s (17.7MB/s-20.0MB/s), io=72.4MiB (76.0MB), run=1001-1044msec 00:12:34.217 WRITE: bw=71.2MiB/s (74.6MB/s), 17.9MiB/s-19.2MiB/s (18.8MB/s-20.1MB/s), io=74.3MiB (77.9MB), run=1001-1044msec 00:12:34.217 00:12:34.217 Disk stats (read/write): 00:12:34.217 nvme0n1: ios=4146/4551, merge=0/0, ticks=24959/25282, in_queue=50241, util=86.57% 00:12:34.217 nvme0n2: ios=3744/4096, merge=0/0, ticks=12131/13195, in_queue=25326, util=98.37% 00:12:34.217 nvme0n3: ios=3852/4096, merge=0/0, ticks=16750/16176, in_queue=32926, util=90.50% 00:12:34.217 nvme0n4: ios=3624/3807, 
merge=0/0, ticks=52626/45903, in_queue=98529, util=98.52% 00:12:34.217 12:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:34.217 12:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=569927 00:12:34.217 12:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:34.217 12:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:34.217 [global] 00:12:34.217 thread=1 00:12:34.217 invalidate=1 00:12:34.217 rw=read 00:12:34.217 time_based=1 00:12:34.217 runtime=10 00:12:34.217 ioengine=libaio 00:12:34.217 direct=1 00:12:34.217 bs=4096 00:12:34.217 iodepth=1 00:12:34.217 norandommap=1 00:12:34.217 numjobs=1 00:12:34.217 00:12:34.217 [job0] 00:12:34.217 filename=/dev/nvme0n1 00:12:34.217 [job1] 00:12:34.217 filename=/dev/nvme0n2 00:12:34.217 [job2] 00:12:34.217 filename=/dev/nvme0n3 00:12:34.217 [job3] 00:12:34.217 filename=/dev/nvme0n4 00:12:34.217 Could not set queue depth (nvme0n1) 00:12:34.217 Could not set queue depth (nvme0n2) 00:12:34.217 Could not set queue depth (nvme0n3) 00:12:34.217 Could not set queue depth (nvme0n4) 00:12:34.474 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.474 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.474 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.474 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:34.474 fio-3.35 00:12:34.474 Starting 4 threads 00:12:37.754 12:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:37.754 12:27:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:37.754 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=851968, buflen=4096 00:12:37.754 fio: pid=570026, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:37.754 12:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:37.754 12:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:37.754 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2801664, buflen=4096 00:12:37.754 fio: pid=570025, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:38.012 12:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.012 12:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:38.012 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=17641472, buflen=4096 00:12:38.012 fio: pid=570022, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:38.578 12:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.578 12:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:38.578 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=29151232, buflen=4096 00:12:38.578 fio: pid=570023, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:12:38.578 00:12:38.578 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=570022: Tue Nov 5 12:27:07 2024 00:12:38.578 read: IOPS=1208, BW=4834KiB/s (4950kB/s)(16.8MiB/3564msec) 00:12:38.578 slat (usec): min=5, max=12964, avg=16.55, stdev=197.52 00:12:38.578 clat (usec): min=161, max=42004, avg=802.47, stdev=4754.05 00:12:38.578 lat (usec): min=167, max=54000, avg=819.03, stdev=4784.37 00:12:38.578 clat percentiles (usec): 00:12:38.578 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 190], 00:12:38.578 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 233], 00:12:38.578 | 70.00th=[ 247], 80.00th=[ 281], 90.00th=[ 347], 95.00th=[ 375], 00:12:38.578 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:12:38.578 | 99.99th=[42206] 00:12:38.578 bw ( KiB/s): min= 96, max=15064, per=44.71%, avg=5725.33, stdev=5875.69, samples=6 00:12:38.578 iops : min= 24, max= 3766, avg=1431.33, stdev=1468.92, samples=6 00:12:38.578 lat (usec) : 250=71.29%, 500=27.04%, 750=0.26% 00:12:38.578 lat (msec) : 50=1.39% 00:12:38.578 cpu : usr=0.65%, sys=1.91%, ctx=4312, majf=0, minf=2 00:12:38.578 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.578 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.579 issued rwts: total=4308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.579 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=570023: Tue Nov 5 12:27:07 2024 00:12:38.579 read: IOPS=1850, BW=7400KiB/s (7578kB/s)(27.8MiB/3847msec) 00:12:38.579 slat (usec): min=4, max=28830, avg=16.34, stdev=354.10 00:12:38.579 clat (usec): min=159, max=50127, avg=518.29, stdev=3548.40 00:12:38.579 lat (usec): min=165, 
max=70940, avg=534.62, stdev=3626.01 00:12:38.579 clat percentiles (usec): 00:12:38.579 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:12:38.579 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:12:38.579 | 70.00th=[ 215], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 302], 00:12:38.579 | 99.00th=[ 453], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:12:38.579 | 99.99th=[50070] 00:12:38.579 bw ( KiB/s): min= 86, max=20256, per=63.44%, avg=8124.29, stdev=9074.53, samples=7 00:12:38.579 iops : min= 21, max= 5064, avg=2031.00, stdev=2268.71, samples=7 00:12:38.579 lat (usec) : 250=86.86%, 500=12.29%, 750=0.06%, 1000=0.01% 00:12:38.579 lat (msec) : 4=0.01%, 50=0.73%, 100=0.01% 00:12:38.579 cpu : usr=0.68%, sys=2.65%, ctx=7122, majf=0, minf=1 00:12:38.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.579 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.579 issued rwts: total=7118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.579 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=570025: Tue Nov 5 12:27:07 2024 00:12:38.579 read: IOPS=209, BW=839KiB/s (859kB/s)(2736KiB/3262msec) 00:12:38.579 slat (nsec): min=5551, max=34334, avg=10390.57, stdev=6352.41 00:12:38.579 clat (usec): min=172, max=42161, avg=4722.08, stdev=12856.29 00:12:38.579 lat (usec): min=178, max=42180, avg=4732.47, stdev=12860.22 00:12:38.579 clat percentiles (usec): 00:12:38.579 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:12:38.579 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:12:38.579 | 70.00th=[ 215], 80.00th=[ 235], 90.00th=[41157], 95.00th=[41157], 00:12:38.579 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:12:38.579 | 99.99th=[42206] 00:12:38.579 bw ( KiB/s): min= 96, max= 4688, per=7.06%, avg=904.00, stdev=1854.74, samples=6 00:12:38.579 iops : min= 24, max= 1172, avg=226.00, stdev=463.68, samples=6 00:12:38.579 lat (usec) : 250=81.61%, 500=6.57%, 750=0.58%, 1000=0.15% 00:12:38.579 lat (msec) : 50=10.95% 00:12:38.579 cpu : usr=0.28%, sys=0.12%, ctx=685, majf=0, minf=2 00:12:38.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:38.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.579 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.579 issued rwts: total=685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.579 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=570026: Tue Nov 5 12:27:07 2024 00:12:38.579 read: IOPS=70, BW=282KiB/s (289kB/s)(832KiB/2948msec) 00:12:38.579 slat (nsec): min=7782, max=71037, avg=22725.09, stdev=8901.19 00:12:38.579 clat (usec): min=252, max=43061, avg=14028.30, stdev=19183.43 00:12:38.579 lat (usec): min=274, max=43077, avg=14050.96, stdev=19180.92 00:12:38.579 clat percentiles (usec): 00:12:38.579 | 1.00th=[ 260], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[ 322], 00:12:38.579 | 30.00th=[ 363], 40.00th=[ 400], 50.00th=[ 461], 60.00th=[ 562], 00:12:38.579 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:12:38.579 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:12:38.579 | 99.99th=[43254] 00:12:38.579 bw ( KiB/s): min= 208, max= 376, per=2.27%, avg=291.20, stdev=64.65, samples=5 00:12:38.579 iops : min= 52, max= 94, avg=72.80, stdev=16.16, samples=5 00:12:38.579 lat (usec) : 500=55.98%, 750=9.09%, 1000=0.96% 00:12:38.579 lat (msec) : 50=33.49% 00:12:38.579 cpu : usr=0.00%, sys=0.34%, ctx=209, majf=0, minf=1 00:12:38.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:12:38.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.579 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:38.579 issued rwts: total=209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:38.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:38.579 00:12:38.579 Run status group 0 (all jobs): 00:12:38.579 READ: bw=12.5MiB/s (13.1MB/s), 282KiB/s-7400KiB/s (289kB/s-7578kB/s), io=48.1MiB (50.4MB), run=2948-3847msec 00:12:38.579 00:12:38.579 Disk stats (read/write): 00:12:38.579 nvme0n1: ios=4347/0, merge=0/0, ticks=4145/0, in_queue=4145, util=99.43% 00:12:38.579 nvme0n2: ios=7151/0, merge=0/0, ticks=3579/0, in_queue=3579, util=98.85% 00:12:38.579 nvme0n3: ios=680/0, merge=0/0, ticks=3069/0, in_queue=3069, util=96.79% 00:12:38.579 nvme0n4: ios=247/0, merge=0/0, ticks=3006/0, in_queue=3006, util=100.00% 00:12:38.579 12:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:38.579 12:27:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:39.144 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:39.144 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:39.402 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:39.402 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:39.660 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:39.660 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:39.918 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:39.918 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 569927 00:12:39.918 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:39.918 12:27:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.918 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.918 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:12:39.918 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:39.918 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.919 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:39.919 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.919 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:12:39.919 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:39.919 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:39.919 nvmf hotplug test: fio failed as expected 00:12:39.919 12:27:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.177 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.177 rmmod nvme_tcp 00:12:40.436 rmmod nvme_fabrics 00:12:40.436 rmmod nvme_keyring 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 567775 ']' 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@518 -- # killprocess 567775 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 567775 ']' 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 567775 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 567775 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 567775' 00:12:40.436 killing process with pid 567775 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 567775 00:12:40.436 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 567775 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.720 12:27:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.720 12:27:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.629 00:12:42.629 real 0m24.426s 00:12:42.629 user 1m25.046s 00:12:42.629 sys 0m7.614s 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.629 ************************************ 00:12:42.629 END TEST nvmf_fio_target 00:12:42.629 ************************************ 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:42.629 ************************************ 00:12:42.629 START TEST nvmf_bdevio 00:12:42.629 ************************************ 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:42.629 * Looking for test storage... 00:12:42.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:12:42.629 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:42.888 
12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:42.888 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:42.888 --rc genhtml_branch_coverage=1 00:12:42.888 --rc genhtml_function_coverage=1 00:12:42.888 --rc genhtml_legend=1 00:12:42.888 --rc geninfo_all_blocks=1 00:12:42.888 --rc geninfo_unexecuted_blocks=1 00:12:42.888 00:12:42.888 ' 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:42.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.888 --rc genhtml_branch_coverage=1 00:12:42.888 --rc genhtml_function_coverage=1 00:12:42.888 --rc genhtml_legend=1 00:12:42.888 --rc geninfo_all_blocks=1 00:12:42.888 --rc geninfo_unexecuted_blocks=1 00:12:42.888 00:12:42.888 ' 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:42.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.888 --rc genhtml_branch_coverage=1 00:12:42.888 --rc genhtml_function_coverage=1 00:12:42.888 --rc genhtml_legend=1 00:12:42.888 --rc geninfo_all_blocks=1 00:12:42.888 --rc geninfo_unexecuted_blocks=1 00:12:42.888 00:12:42.888 ' 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:42.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.888 --rc genhtml_branch_coverage=1 00:12:42.888 --rc genhtml_function_coverage=1 00:12:42.888 --rc genhtml_legend=1 00:12:42.888 --rc geninfo_all_blocks=1 00:12:42.888 --rc geninfo_unexecuted_blocks=1 00:12:42.888 00:12:42.888 ' 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.888 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.889 12:27:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.426 12:27:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.426 12:27:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:45.426 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:45.426 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.426 
12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:45.426 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:45.426 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.426 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:12:45.427 00:12:45.427 --- 10.0.0.2 ping statistics --- 00:12:45.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.427 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:12:45.427 00:12:45.427 --- 10.0.0.1 ping statistics --- 00:12:45.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.427 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.427 12:27:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=573172 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 573172 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 573172 ']' 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 [2024-11-05 12:27:14.275643] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:12:45.427 [2024-11-05 12:27:14.275736] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.427 [2024-11-05 12:27:14.354827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.427 [2024-11-05 12:27:14.403051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.427 [2024-11-05 12:27:14.403108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.427 [2024-11-05 12:27:14.403122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.427 [2024-11-05 12:27:14.403133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.427 [2024-11-05 12:27:14.403148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:45.427 [2024-11-05 12:27:14.404683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:45.427 [2024-11-05 12:27:14.404714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:45.427 [2024-11-05 12:27:14.404785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:45.427 [2024-11-05 12:27:14.404788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 [2024-11-05 12:27:14.553218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.427 12:27:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 Malloc0 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:45.427 [2024-11-05 12:27:14.615902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:45.427 { 00:12:45.427 "params": { 00:12:45.427 "name": "Nvme$subsystem", 00:12:45.427 "trtype": "$TEST_TRANSPORT", 00:12:45.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:45.427 "adrfam": "ipv4", 00:12:45.427 "trsvcid": "$NVMF_PORT", 00:12:45.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:45.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:45.427 "hdgst": ${hdgst:-false}, 00:12:45.427 "ddgst": ${ddgst:-false} 00:12:45.427 }, 00:12:45.427 "method": "bdev_nvme_attach_controller" 00:12:45.427 } 00:12:45.427 EOF 00:12:45.427 )") 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:45.427 12:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:45.427 "params": { 00:12:45.427 "name": "Nvme1", 00:12:45.427 "trtype": "tcp", 00:12:45.427 "traddr": "10.0.0.2", 00:12:45.427 "adrfam": "ipv4", 00:12:45.427 "trsvcid": "4420", 00:12:45.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.427 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.427 "hdgst": false, 00:12:45.427 "ddgst": false 00:12:45.427 }, 00:12:45.427 "method": "bdev_nvme_attach_controller" 00:12:45.427 }' 00:12:45.427 [2024-11-05 12:27:14.664460] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:12:45.427 [2024-11-05 12:27:14.664547] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573314 ] 00:12:45.686 [2024-11-05 12:27:14.734537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:45.686 [2024-11-05 12:27:14.784350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.686 [2024-11-05 12:27:14.784403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.686 [2024-11-05 12:27:14.784407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.943 I/O targets: 00:12:45.943 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:45.943 00:12:45.943 00:12:45.943 CUnit - A unit testing framework for C - Version 2.1-3 00:12:45.943 http://cunit.sourceforge.net/ 00:12:45.944 00:12:45.944 00:12:45.944 Suite: bdevio tests on: Nvme1n1 00:12:45.944 Test: blockdev write read block ...passed 00:12:45.944 Test: blockdev write zeroes read block ...passed 00:12:45.944 Test: blockdev write zeroes read no split ...passed 00:12:45.944 Test: blockdev write zeroes read split 
...passed 00:12:45.944 Test: blockdev write zeroes read split partial ...passed 00:12:45.944 Test: blockdev reset ...[2024-11-05 12:27:15.122718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:45.944 [2024-11-05 12:27:15.122828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6aac0 (9): Bad file descriptor 00:12:45.944 [2024-11-05 12:27:15.137498] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:45.944 passed 00:12:45.944 Test: blockdev write read 8 blocks ...passed 00:12:45.944 Test: blockdev write read size > 128k ...passed 00:12:45.944 Test: blockdev write read invalid size ...passed 00:12:45.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:45.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:45.944 Test: blockdev write read max offset ...passed 00:12:46.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:46.201 Test: blockdev writev readv 8 blocks ...passed 00:12:46.201 Test: blockdev writev readv 30 x 1block ...passed 00:12:46.201 Test: blockdev writev readv block ...passed 00:12:46.201 Test: blockdev writev readv size > 128k ...passed 00:12:46.201 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:46.201 Test: blockdev comparev and writev ...[2024-11-05 12:27:15.309159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 12:27:15.309197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.309222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 
12:27:15.309239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.309582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 12:27:15.309605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.309627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 12:27:15.309644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.309997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 12:27:15.310021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.310043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 12:27:15.310059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.310391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 12:27:15.310417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.310454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:46.201 [2024-11-05 12:27:15.310471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:46.201 passed 00:12:46.201 Test: blockdev nvme passthru rw ...passed 00:12:46.201 Test: blockdev nvme passthru vendor specific ...[2024-11-05 12:27:15.394099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.201 [2024-11-05 12:27:15.394128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.394264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.201 [2024-11-05 12:27:15.394287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.394422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.201 [2024-11-05 12:27:15.394446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:46.201 [2024-11-05 12:27:15.394590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:46.201 [2024-11-05 12:27:15.394612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:46.201 passed 00:12:46.201 Test: blockdev nvme admin passthru ...passed 00:12:46.459 Test: blockdev copy ...passed 00:12:46.459 00:12:46.459 Run Summary: Type Total Ran Passed Failed Inactive 00:12:46.459 suites 1 1 n/a 0 0 00:12:46.459 tests 23 23 23 0 0 00:12:46.459 asserts 152 152 152 0 n/a 00:12:46.459 00:12:46.460 Elapsed time = 0.976 seconds 
00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.460 rmmod nvme_tcp 00:12:46.460 rmmod nvme_fabrics 00:12:46.460 rmmod nvme_keyring 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 573172 ']' 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 573172 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- 
# '[' -z 573172 ']' 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 573172 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:46.460 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 573172 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 573172' 00:12:46.719 killing process with pid 573172 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 573172 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 573172 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.719 12:27:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.257 12:27:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:49.257 00:12:49.257 real 0m6.217s 00:12:49.257 user 0m8.917s 00:12:49.257 sys 0m2.139s 00:12:49.257 12:27:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.257 12:27:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:49.257 ************************************ 00:12:49.257 END TEST nvmf_bdevio 00:12:49.257 ************************************ 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:49.257 00:12:49.257 real 3m54.761s 00:12:49.257 user 10m10.953s 00:12:49.257 sys 1m8.015s 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:49.257 ************************************ 00:12:49.257 END TEST nvmf_target_core 00:12:49.257 ************************************ 00:12:49.257 12:27:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:49.257 12:27:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:49.257 12:27:18 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.257 12:27:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:12:49.257 ************************************ 00:12:49.257 START TEST nvmf_target_extra 00:12:49.257 ************************************ 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:49.257 * Looking for test storage... 00:12:49.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.257 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.258 --rc genhtml_branch_coverage=1 00:12:49.258 --rc genhtml_function_coverage=1 00:12:49.258 --rc genhtml_legend=1 00:12:49.258 --rc geninfo_all_blocks=1 
00:12:49.258 --rc geninfo_unexecuted_blocks=1 00:12:49.258 00:12:49.258 ' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.258 --rc genhtml_branch_coverage=1 00:12:49.258 --rc genhtml_function_coverage=1 00:12:49.258 --rc genhtml_legend=1 00:12:49.258 --rc geninfo_all_blocks=1 00:12:49.258 --rc geninfo_unexecuted_blocks=1 00:12:49.258 00:12:49.258 ' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.258 --rc genhtml_branch_coverage=1 00:12:49.258 --rc genhtml_function_coverage=1 00:12:49.258 --rc genhtml_legend=1 00:12:49.258 --rc geninfo_all_blocks=1 00:12:49.258 --rc geninfo_unexecuted_blocks=1 00:12:49.258 00:12:49.258 ' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:49.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.258 --rc genhtml_branch_coverage=1 00:12:49.258 --rc genhtml_function_coverage=1 00:12:49.258 --rc genhtml_legend=1 00:12:49.258 --rc geninfo_all_blocks=1 00:12:49.258 --rc geninfo_unexecuted_blocks=1 00:12:49.258 00:12:49.258 ' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:49.258 ************************************ 00:12:49.258 START TEST nvmf_example 00:12:49.258 ************************************ 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:49.258 * Looking for test storage... 00:12:49.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:49.258 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:49.259 
12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:49.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.259 --rc genhtml_branch_coverage=1 00:12:49.259 --rc genhtml_function_coverage=1 00:12:49.259 --rc genhtml_legend=1 00:12:49.259 --rc geninfo_all_blocks=1 00:12:49.259 --rc geninfo_unexecuted_blocks=1 00:12:49.259 00:12:49.259 ' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:49.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.259 --rc genhtml_branch_coverage=1 00:12:49.259 --rc genhtml_function_coverage=1 00:12:49.259 --rc genhtml_legend=1 00:12:49.259 --rc geninfo_all_blocks=1 00:12:49.259 --rc geninfo_unexecuted_blocks=1 00:12:49.259 00:12:49.259 ' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:49.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.259 --rc genhtml_branch_coverage=1 00:12:49.259 --rc genhtml_function_coverage=1 00:12:49.259 --rc genhtml_legend=1 00:12:49.259 --rc geninfo_all_blocks=1 00:12:49.259 --rc geninfo_unexecuted_blocks=1 00:12:49.259 00:12:49.259 ' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:49.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:49.259 --rc 
genhtml_branch_coverage=1 00:12:49.259 --rc genhtml_function_coverage=1 00:12:49.259 --rc genhtml_legend=1 00:12:49.259 --rc geninfo_all_blocks=1 00:12:49.259 --rc geninfo_unexecuted_blocks=1 00:12:49.259 00:12:49.259 ' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:49.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:49.259 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:49.260 12:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.260 
12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:49.260 12:27:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:51.791 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.792 12:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.792 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.792 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.792 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.792 12:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.792 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.792 
12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:12:51.792 00:12:51.792 --- 10.0.0.2 ping statistics --- 00:12:51.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.792 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:51.792 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:12:51.792 00:12:51.792 --- 10.0.0.1 ping statistics --- 00:12:51.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.792 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.793 12:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=575451 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 575451 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 575451 ']' 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:51.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:51.793 12:27:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:52.051 12:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:52.051 12:27:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:04.249 Initializing NVMe Controllers 00:13:04.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:04.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:04.249 Initialization complete. Launching workers. 00:13:04.249 ======================================================== 00:13:04.249 Latency(us) 00:13:04.249 Device Information : IOPS MiB/s Average min max 00:13:04.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14901.52 58.21 4294.49 875.66 15335.97 00:13:04.249 ======================================================== 00:13:04.249 Total : 14901.52 58.21 4294.49 875.66 15335.97 00:13:04.249 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.249 rmmod nvme_tcp 00:13:04.249 rmmod nvme_fabrics 00:13:04.249 rmmod nvme_keyring 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 575451 ']' 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 575451 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 575451 ']' 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 575451 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 575451 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 575451' 00:13:04.249 killing process with pid 575451 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 575451 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 575451 00:13:04.249 nvmf threads initialize successfully 00:13:04.249 bdev subsystem init successfully 00:13:04.249 created a nvmf target service 00:13:04.249 create targets's poll groups done 00:13:04.249 all subsystems of target started 00:13:04.249 nvmf target is running 00:13:04.249 all subsystems of target stopped 00:13:04.249 destroy targets's poll groups done 00:13:04.249 destroyed the nvmf target service 00:13:04.249 bdev subsystem finish 
successfully 00:13:04.249 nvmf threads destroy successfully 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.249 12:27:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:04.508 00:13:04.508 real 0m15.399s 00:13:04.508 user 0m42.036s 00:13:04.508 sys 0m3.382s 00:13:04.508 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:04.508 ************************************ 00:13:04.508 END TEST nvmf_example 00:13:04.508 ************************************ 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.508 ************************************ 00:13:04.508 START TEST nvmf_filesystem 00:13:04.508 ************************************ 00:13:04.508 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:04.770 * Looking for test storage... 
00:13:04.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:04.770 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:04.770 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:04.770 --rc genhtml_branch_coverage=1 00:13:04.770 --rc genhtml_function_coverage=1 00:13:04.770 --rc genhtml_legend=1 00:13:04.770 --rc geninfo_all_blocks=1 00:13:04.770 --rc geninfo_unexecuted_blocks=1 00:13:04.770 00:13:04.770 ' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:04.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.770 --rc genhtml_branch_coverage=1 00:13:04.770 --rc genhtml_function_coverage=1 00:13:04.770 --rc genhtml_legend=1 00:13:04.770 --rc geninfo_all_blocks=1 00:13:04.770 --rc geninfo_unexecuted_blocks=1 00:13:04.770 00:13:04.770 ' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:04.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.770 --rc genhtml_branch_coverage=1 00:13:04.770 --rc genhtml_function_coverage=1 00:13:04.770 --rc genhtml_legend=1 00:13:04.770 --rc geninfo_all_blocks=1 00:13:04.770 --rc geninfo_unexecuted_blocks=1 00:13:04.770 00:13:04.770 ' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:04.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.770 --rc genhtml_branch_coverage=1 00:13:04.770 --rc genhtml_function_coverage=1 00:13:04.770 --rc genhtml_legend=1 00:13:04.770 --rc geninfo_all_blocks=1 00:13:04.770 --rc geninfo_unexecuted_blocks=1 00:13:04.770 00:13:04.770 ' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:04.770 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:04.770 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:04.770 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:04.771 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:04.771 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:04.771 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:04.771 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:04.771 #define SPDK_CONFIG_H 00:13:04.771 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:04.771 #define SPDK_CONFIG_APPS 1 00:13:04.771 #define SPDK_CONFIG_ARCH native 00:13:04.771 #undef SPDK_CONFIG_ASAN 00:13:04.771 #undef SPDK_CONFIG_AVAHI 00:13:04.771 #undef SPDK_CONFIG_CET 00:13:04.771 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:04.771 #define SPDK_CONFIG_COVERAGE 1 00:13:04.771 #define SPDK_CONFIG_CROSS_PREFIX 00:13:04.771 #undef SPDK_CONFIG_CRYPTO 00:13:04.771 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:04.771 #undef SPDK_CONFIG_CUSTOMOCF 00:13:04.771 #undef SPDK_CONFIG_DAOS 00:13:04.771 #define SPDK_CONFIG_DAOS_DIR 00:13:04.771 #define SPDK_CONFIG_DEBUG 1 00:13:04.771 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:04.771 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:04.771 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:13:04.771 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:04.771 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:04.771 #undef SPDK_CONFIG_DPDK_UADK 00:13:04.771 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:04.771 #define SPDK_CONFIG_EXAMPLES 1 00:13:04.771 #undef SPDK_CONFIG_FC 00:13:04.771 #define SPDK_CONFIG_FC_PATH 00:13:04.771 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:04.771 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:04.771 #define SPDK_CONFIG_FSDEV 1 00:13:04.771 #undef SPDK_CONFIG_FUSE 00:13:04.771 #undef SPDK_CONFIG_FUZZER 00:13:04.771 #define 
SPDK_CONFIG_FUZZER_LIB 00:13:04.771 #undef SPDK_CONFIG_GOLANG 00:13:04.771 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:04.771 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:04.771 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:04.771 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:04.772 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:04.772 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:04.772 #undef SPDK_CONFIG_HAVE_LZ4 00:13:04.772 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:04.772 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:04.772 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:04.772 #define SPDK_CONFIG_IDXD 1 00:13:04.772 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:04.772 #undef SPDK_CONFIG_IPSEC_MB 00:13:04.772 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:04.772 #define SPDK_CONFIG_ISAL 1 00:13:04.772 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:04.772 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:04.772 #define SPDK_CONFIG_LIBDIR 00:13:04.772 #undef SPDK_CONFIG_LTO 00:13:04.772 #define SPDK_CONFIG_MAX_LCORES 128 00:13:04.772 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:04.772 #define SPDK_CONFIG_NVME_CUSE 1 00:13:04.772 #undef SPDK_CONFIG_OCF 00:13:04.772 #define SPDK_CONFIG_OCF_PATH 00:13:04.772 #define SPDK_CONFIG_OPENSSL_PATH 00:13:04.772 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:04.772 #define SPDK_CONFIG_PGO_DIR 00:13:04.772 #undef SPDK_CONFIG_PGO_USE 00:13:04.772 #define SPDK_CONFIG_PREFIX /usr/local 00:13:04.772 #undef SPDK_CONFIG_RAID5F 00:13:04.772 #undef SPDK_CONFIG_RBD 00:13:04.772 #define SPDK_CONFIG_RDMA 1 00:13:04.772 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:04.772 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:04.772 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:04.772 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:04.772 #define SPDK_CONFIG_SHARED 1 00:13:04.772 #undef SPDK_CONFIG_SMA 00:13:04.772 #define SPDK_CONFIG_TESTS 1 00:13:04.772 #undef SPDK_CONFIG_TSAN 00:13:04.772 #define SPDK_CONFIG_UBLK 1 00:13:04.772 #define SPDK_CONFIG_UBSAN 1 00:13:04.772 #undef 
SPDK_CONFIG_UNIT_TESTS 00:13:04.772 #undef SPDK_CONFIG_URING 00:13:04.772 #define SPDK_CONFIG_URING_PATH 00:13:04.772 #undef SPDK_CONFIG_URING_ZNS 00:13:04.772 #undef SPDK_CONFIG_USDT 00:13:04.772 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:04.772 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:04.772 #define SPDK_CONFIG_VFIO_USER 1 00:13:04.772 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:04.772 #define SPDK_CONFIG_VHOST 1 00:13:04.772 #define SPDK_CONFIG_VIRTIO 1 00:13:04.772 #undef SPDK_CONFIG_VTUNE 00:13:04.772 #define SPDK_CONFIG_VTUNE_DIR 00:13:04.772 #define SPDK_CONFIG_WERROR 1 00:13:04.772 #define SPDK_CONFIG_WPDK_DIR 00:13:04.772 #undef SPDK_CONFIG_XNVME 00:13:04.772 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.772 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:04.772 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:04.772 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:04.773 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:04.773 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:04.773 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:04.773 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:04.773 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:04.774 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 577132 ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 577132 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.G3shbO 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.G3shbO/tests/target /tmp/spdk.G3shbO 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=53518753792 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988532224 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8469778432 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:04.775 
12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30984232960 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375277568 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22429696 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993928192 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994268160 00:13:04.775 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=339968 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:04.775 * Looking for test storage... 
00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=53518753792 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10684370944 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.775 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:04.775 12:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:04.775 12:27:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:05.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.035 --rc genhtml_branch_coverage=1 00:13:05.035 --rc genhtml_function_coverage=1 00:13:05.035 --rc genhtml_legend=1 00:13:05.035 --rc geninfo_all_blocks=1 00:13:05.035 --rc geninfo_unexecuted_blocks=1 00:13:05.035 00:13:05.035 ' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:05.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.035 --rc genhtml_branch_coverage=1 00:13:05.035 --rc genhtml_function_coverage=1 00:13:05.035 --rc genhtml_legend=1 00:13:05.035 --rc geninfo_all_blocks=1 00:13:05.035 --rc geninfo_unexecuted_blocks=1 00:13:05.035 00:13:05.035 ' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:05.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.035 --rc genhtml_branch_coverage=1 00:13:05.035 --rc genhtml_function_coverage=1 00:13:05.035 --rc genhtml_legend=1 00:13:05.035 --rc geninfo_all_blocks=1 00:13:05.035 --rc geninfo_unexecuted_blocks=1 00:13:05.035 00:13:05.035 ' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:05.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:05.035 --rc genhtml_branch_coverage=1 00:13:05.035 --rc genhtml_function_coverage=1 00:13:05.035 --rc genhtml_legend=1 00:13:05.035 --rc geninfo_all_blocks=1 00:13:05.035 --rc geninfo_unexecuted_blocks=1 00:13:05.035 00:13:05.035 ' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.035 12:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.035 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:05.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:05.036 12:27:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.572 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.572 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:07.573 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:07.573 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.573 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:07.573 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:07.573 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:07.573 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:07.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:07.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:13:07.573 00:13:07.573 --- 10.0.0.2 ping statistics --- 00:13:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.573 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:13:07.573 00:13:07.573 --- 10.0.0.1 ping statistics --- 00:13:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.573 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:07.573 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:07.573 ************************************ 00:13:07.573 START TEST nvmf_filesystem_no_in_capsule 00:13:07.573 ************************************ 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:07.573 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=578791 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 578791 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 578791 ']' 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.574 [2024-11-05 12:27:36.535536] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:13:07.574 [2024-11-05 12:27:36.535612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.574 [2024-11-05 12:27:36.612567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.574 [2024-11-05 12:27:36.658556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.574 [2024-11-05 12:27:36.658611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:07.574 [2024-11-05 12:27:36.658625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.574 [2024-11-05 12:27:36.658637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.574 [2024-11-05 12:27:36.658646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.574 [2024-11-05 12:27:36.660060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.574 [2024-11-05 12:27:36.660084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.574 [2024-11-05 12:27:36.660150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.574 [2024-11-05 12:27:36.660154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.574 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.574 [2024-11-05 12:27:36.811704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.832 Malloc1 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.832 [2024-11-05 12:27:36.991586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:07.832 12:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.832 12:27:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.832 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.832 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:07.832 { 00:13:07.832 "name": "Malloc1", 00:13:07.832 "aliases": [ 00:13:07.832 "0e5dcba3-77f5-4b43-b4d8-a9d4fd33a4b9" 00:13:07.832 ], 00:13:07.832 "product_name": "Malloc disk", 00:13:07.832 "block_size": 512, 00:13:07.832 "num_blocks": 1048576, 00:13:07.832 "uuid": "0e5dcba3-77f5-4b43-b4d8-a9d4fd33a4b9", 00:13:07.832 "assigned_rate_limits": { 00:13:07.832 "rw_ios_per_sec": 0, 00:13:07.832 "rw_mbytes_per_sec": 0, 00:13:07.832 "r_mbytes_per_sec": 0, 00:13:07.832 "w_mbytes_per_sec": 0 00:13:07.832 }, 00:13:07.832 "claimed": true, 00:13:07.832 "claim_type": "exclusive_write", 00:13:07.832 "zoned": false, 00:13:07.832 "supported_io_types": { 00:13:07.832 "read": true, 00:13:07.832 "write": true, 00:13:07.832 "unmap": true, 00:13:07.832 "flush": true, 00:13:07.832 "reset": true, 00:13:07.832 "nvme_admin": false, 00:13:07.832 "nvme_io": false, 00:13:07.832 "nvme_io_md": false, 00:13:07.832 "write_zeroes": true, 00:13:07.832 "zcopy": true, 00:13:07.832 "get_zone_info": false, 00:13:07.832 "zone_management": false, 00:13:07.832 "zone_append": false, 00:13:07.832 "compare": false, 00:13:07.832 "compare_and_write": 
false, 00:13:07.832 "abort": true, 00:13:07.832 "seek_hole": false, 00:13:07.832 "seek_data": false, 00:13:07.832 "copy": true, 00:13:07.832 "nvme_iov_md": false 00:13:07.832 }, 00:13:07.832 "memory_domains": [ 00:13:07.832 { 00:13:07.832 "dma_device_id": "system", 00:13:07.832 "dma_device_type": 1 00:13:07.832 }, 00:13:07.832 { 00:13:07.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.832 "dma_device_type": 2 00:13:07.832 } 00:13:07.832 ], 00:13:07.832 "driver_specific": {} 00:13:07.832 } 00:13:07.832 ]' 00:13:07.832 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:07.832 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:07.832 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:08.090 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:08.090 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:08.090 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:08.090 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:08.090 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.654 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:13:08.654 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:08.654 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.654 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:08.654 12:27:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:10.552 12:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:10.552 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:10.808 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:11.373 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:12.306 12:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.306 ************************************ 00:13:12.306 START TEST filesystem_ext4 00:13:12.306 ************************************ 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:12.306 12:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:12.306 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:12.306 mke2fs 1.47.0 (5-Feb-2023) 00:13:12.564 Discarding device blocks: 0/522240 done 00:13:12.564 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:12.564 Filesystem UUID: 02126176-c5c5-41f7-8c63-31a8e9583515 00:13:12.564 Superblock backups stored on blocks: 00:13:12.564 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:12.564 00:13:12.564 Allocating group tables: 0/64 done 00:13:12.564 Writing inode tables: 0/64 done 00:13:12.564 Creating journal (8192 blocks): done 00:13:12.564 Writing superblocks and filesystem accounting information: 0/64 done 00:13:12.564 00:13:12.564 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:12.564 12:27:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:19.118 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 578791 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:19.118 00:13:19.118 real 0m6.443s 00:13:19.118 user 0m0.028s 00:13:19.118 sys 0m0.055s 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 ************************************ 00:13:19.118 END TEST filesystem_ext4 00:13:19.118 ************************************ 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:19.118 
12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 ************************************ 00:13:19.118 START TEST filesystem_btrfs 00:13:19.118 ************************************ 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:19.118 12:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:19.118 12:27:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:19.118 btrfs-progs v6.8.1 00:13:19.118 See https://btrfs.readthedocs.io for more information. 00:13:19.118 00:13:19.118 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:19.118 NOTE: several default settings have changed in version 5.15, please make sure 00:13:19.118 this does not affect your deployments: 00:13:19.118 - DUP for metadata (-m dup) 00:13:19.118 - enabled no-holes (-O no-holes) 00:13:19.118 - enabled free-space-tree (-R free-space-tree) 00:13:19.118 00:13:19.118 Label: (null) 00:13:19.118 UUID: f0e3f02e-8eee-4a23-97d8-6c4f9df38c86 00:13:19.118 Node size: 16384 00:13:19.118 Sector size: 4096 (CPU page size: 4096) 00:13:19.118 Filesystem size: 510.00MiB 00:13:19.118 Block group profiles: 00:13:19.118 Data: single 8.00MiB 00:13:19.118 Metadata: DUP 32.00MiB 00:13:19.118 System: DUP 8.00MiB 00:13:19.118 SSD detected: yes 00:13:19.118 Zoned device: no 00:13:19.118 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:19.118 Checksum: crc32c 00:13:19.118 Number of devices: 1 00:13:19.118 Devices: 00:13:19.118 ID SIZE PATH 00:13:19.118 1 510.00MiB /dev/nvme0n1p1 00:13:19.118 00:13:19.118 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:19.118 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:19.684 12:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 578791 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:19.684 00:13:19.684 real 0m0.929s 00:13:19.684 user 0m0.023s 00:13:19.684 sys 0m0.101s 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.684 
12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:19.684 ************************************ 00:13:19.684 END TEST filesystem_btrfs 00:13:19.684 ************************************ 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.684 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.942 ************************************ 00:13:19.942 START TEST filesystem_xfs 00:13:19.942 ************************************ 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:19.942 12:27:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:19.942 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:19.942 = sectsz=512 attr=2, projid32bit=1 00:13:19.942 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:19.942 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:19.942 data = bsize=4096 blocks=130560, imaxpct=25 00:13:19.942 = sunit=0 swidth=0 blks 00:13:19.942 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:19.942 log =internal log bsize=4096 blocks=16384, version=2 00:13:19.942 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:19.942 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:20.874 Discarding blocks...Done. 
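The three `make_filesystem` traces above show the same small decision each time: mke2fs forces with uppercase `-F` (`'[' ext4 = ext4 ']'` then `force=-F`), while mkfs.btrfs and mkfs.xfs use lowercase `-f`. A minimal sketch of that selection, assuming a helper that only prints the command so it can run without a block device (the function name is illustrative, not the real `make_filesystem`):

```shell
# Sketch of the force-flag selection visible in the xtrace output above:
# ext4's mke2fs takes -F; btrfs-progs and xfsprogs take -f. This variant
# only echoes the mkfs command it would run.
make_filesystem_cmd() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F        # mke2fs uses uppercase -F to force
    else
        force=-f        # mkfs.btrfs / mkfs.xfs use lowercase -f
    fi
    echo "mkfs.$fstype $force $dev_name"
}

cmd_ext4=$(make_filesystem_cmd ext4 /dev/nvme0n1p1)
cmd_xfs=$(make_filesystem_cmd xfs /dev/nvme0n1p1)
```

The real helper also retries the mkfs a few times (`local i=0`) before giving up, which this sketch omits.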
00:13:20.874 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:20.874 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:23.398 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 578791 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:23.399 12:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:23.399 00:13:23.399 real 0m3.623s 00:13:23.399 user 0m0.022s 00:13:23.399 sys 0m0.061s 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:23.399 ************************************ 00:13:23.399 END TEST filesystem_xfs 00:13:23.399 ************************************ 00:13:23.399 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:23.657 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:23.657 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 578791 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 578791 ']' 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 578791 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 578791 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 578791' 00:13:23.915 killing process with pid 578791 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 578791 00:13:23.915 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 578791 00:13:24.173 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:24.173 00:13:24.173 real 0m16.899s 00:13:24.173 user 1m5.616s 00:13:24.173 sys 0m2.086s 00:13:24.173 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:24.173 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.173 ************************************ 00:13:24.173 END TEST nvmf_filesystem_no_in_capsule 00:13:24.173 ************************************ 00:13:24.173 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:24.173 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:24.173 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.173 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:24.433 ************************************ 00:13:24.433 START TEST nvmf_filesystem_in_capsule 00:13:24.433 ************************************ 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=581011 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 581011 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 581011 ']' 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.433 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.433 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.433 [2024-11-05 12:27:53.483918] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:13:24.433 [2024-11-05 12:27:53.484012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.433 [2024-11-05 12:27:53.560086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.433 [2024-11-05 12:27:53.609972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.433 [2024-11-05 12:27:53.610042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.433 [2024-11-05 12:27:53.610058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.433 [2024-11-05 12:27:53.610070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.433 [2024-11-05 12:27:53.610080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:24.433 [2024-11-05 12:27:53.611705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.433 [2024-11-05 12:27:53.611759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.433 [2024-11-05 12:27:53.611782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.433 [2024-11-05 12:27:53.611787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.692 [2024-11-05 12:27:53.768467] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.692 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.951 Malloc1 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.951 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.951 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.951 [2024-11-05 12:27:53.963553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.952 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:24.952 { 00:13:24.952 "name": "Malloc1", 00:13:24.952 "aliases": [ 00:13:24.952 "e474e651-d195-451e-b9d8-39c33d93e441" 00:13:24.952 ], 00:13:24.952 "product_name": "Malloc disk", 00:13:24.952 "block_size": 512, 00:13:24.952 "num_blocks": 1048576, 00:13:24.952 "uuid": "e474e651-d195-451e-b9d8-39c33d93e441", 00:13:24.952 "assigned_rate_limits": { 00:13:24.952 "rw_ios_per_sec": 0, 00:13:24.952 "rw_mbytes_per_sec": 0, 00:13:24.952 "r_mbytes_per_sec": 0, 00:13:24.952 "w_mbytes_per_sec": 0 00:13:24.952 }, 00:13:24.952 "claimed": true, 00:13:24.952 "claim_type": "exclusive_write", 00:13:24.952 "zoned": false, 00:13:24.952 "supported_io_types": { 00:13:24.952 "read": true, 00:13:24.952 "write": true, 00:13:24.952 "unmap": true, 00:13:24.952 "flush": true, 00:13:24.952 "reset": true, 00:13:24.952 "nvme_admin": false, 00:13:24.952 "nvme_io": false, 00:13:24.952 "nvme_io_md": false, 00:13:24.952 "write_zeroes": true, 00:13:24.952 "zcopy": true, 00:13:24.952 "get_zone_info": false, 00:13:24.952 "zone_management": false, 00:13:24.952 "zone_append": false, 00:13:24.952 "compare": false, 00:13:24.952 "compare_and_write": false, 00:13:24.952 "abort": true, 00:13:24.952 "seek_hole": false, 00:13:24.952 "seek_data": false, 00:13:24.952 "copy": true, 00:13:24.952 "nvme_iov_md": false 00:13:24.952 }, 00:13:24.952 "memory_domains": [ 00:13:24.952 { 00:13:24.952 "dma_device_id": "system", 00:13:24.952 "dma_device_type": 1 00:13:24.952 }, 00:13:24.952 { 00:13:24.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.952 "dma_device_type": 2 00:13:24.952 } 00:13:24.952 ], 00:13:24.952 
"driver_specific": {} 00:13:24.952 } 00:13:24.952 ]' 00:13:24.952 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:24.952 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:24.952 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:24.952 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:24.952 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:24.952 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:24.952 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:24.952 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.887 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:25.888 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:25.888 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:25.888 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:13:25.888 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:27.791 12:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:27.791 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:28.050 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:28.616 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.991 ************************************ 00:13:29.991 START TEST filesystem_in_capsule_ext4 00:13:29.991 ************************************ 00:13:29.991 12:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:29.991 12:27:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:29.991 mke2fs 1.47.0 (5-Feb-2023) 00:13:29.991 Discarding device blocks: 
0/522240 done 00:13:29.991 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:29.991 Filesystem UUID: 3a6df6d8-2f77-4f59-a939-a7ecd92ca4f5 00:13:29.991 Superblock backups stored on blocks: 00:13:29.991 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:29.991 00:13:29.991 Allocating group tables: 0/64 done 00:13:29.991 Writing inode tables: 0/64 done 00:13:29.991 Creating journal (8192 blocks): done 00:13:32.123 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:13:32.123 00:13:32.123 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:32.123 12:28:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 581011 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:38.683 00:13:38.683 real 0m8.111s 00:13:38.683 user 0m0.025s 00:13:38.683 sys 0m0.058s 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.683 12:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:38.683 ************************************ 00:13:38.683 END TEST filesystem_in_capsule_ext4 00:13:38.683 ************************************ 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.683 ************************************ 00:13:38.683 START 
TEST filesystem_in_capsule_btrfs 00:13:38.683 ************************************ 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:38.683 btrfs-progs v6.8.1 00:13:38.683 See https://btrfs.readthedocs.io for more information. 00:13:38.683 00:13:38.683 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:38.683 NOTE: several default settings have changed in version 5.15, please make sure 00:13:38.683 this does not affect your deployments: 00:13:38.683 - DUP for metadata (-m dup) 00:13:38.683 - enabled no-holes (-O no-holes) 00:13:38.683 - enabled free-space-tree (-R free-space-tree) 00:13:38.683 00:13:38.683 Label: (null) 00:13:38.683 UUID: 0ba4df83-2ffb-40bf-b11b-f091a6607953 00:13:38.683 Node size: 16384 00:13:38.683 Sector size: 4096 (CPU page size: 4096) 00:13:38.683 Filesystem size: 510.00MiB 00:13:38.683 Block group profiles: 00:13:38.683 Data: single 8.00MiB 00:13:38.683 Metadata: DUP 32.00MiB 00:13:38.683 System: DUP 8.00MiB 00:13:38.683 SSD detected: yes 00:13:38.683 Zoned device: no 00:13:38.683 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:38.683 Checksum: crc32c 00:13:38.683 Number of devices: 1 00:13:38.683 Devices: 00:13:38.683 ID SIZE PATH 00:13:38.683 1 510.00MiB /dev/nvme0n1p1 00:13:38.683 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 581011 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:38.683 00:13:38.683 real 0m0.783s 00:13:38.683 user 0m0.026s 00:13:38.683 sys 0m0.089s 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:38.683 ************************************ 00:13:38.683 END TEST filesystem_in_capsule_btrfs 00:13:38.683 ************************************ 00:13:38.683 12:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.683 ************************************ 00:13:38.683 START TEST filesystem_in_capsule_xfs 00:13:38.683 ************************************ 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:38.683 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:38.683 
12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:13:38.684 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:38.684 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:38.684 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:38.942 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:38.942 = sectsz=512 attr=2, projid32bit=1 00:13:38.942 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:38.942 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:38.942 data = bsize=4096 blocks=130560, imaxpct=25 00:13:38.942 = sunit=0 swidth=0 blks 00:13:38.942 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:38.942 log =internal log bsize=4096 blocks=16384, version=2 00:13:38.942 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:38.942 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:39.929 Discarding blocks...Done. 
00:13:39.929 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:39.929 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 581011 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:41.870 00:13:41.870 real 0m2.878s 00:13:41.870 user 0m0.010s 00:13:41.870 sys 0m0.065s 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:41.870 ************************************ 00:13:41.870 END TEST filesystem_in_capsule_xfs 00:13:41.870 ************************************ 00:13:41.870 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:41.870 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:41.870 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.128 12:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 581011 00:13:42.128 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 581011 ']' 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 581011 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:42.129 12:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 581011 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 581011' 00:13:42.129 killing process with pid 581011 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 581011 00:13:42.129 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 581011 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:42.387 00:13:42.387 real 0m18.158s 00:13:42.387 user 1m10.587s 00:13:42.387 sys 0m2.108s 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:42.387 ************************************ 00:13:42.387 END TEST nvmf_filesystem_in_capsule 00:13:42.387 ************************************ 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.387 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.387 rmmod nvme_tcp 00:13:42.647 rmmod nvme_fabrics 00:13:42.647 rmmod nvme_keyring 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.647 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.556 00:13:44.556 real 0m39.996s 00:13:44.556 user 2m17.279s 00:13:44.556 sys 0m5.979s 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.556 ************************************ 00:13:44.556 END TEST nvmf_filesystem 00:13:44.556 ************************************ 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.556 ************************************ 00:13:44.556 START TEST nvmf_target_discovery 00:13:44.556 ************************************ 00:13:44.556 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:44.816 * Looking for test storage... 
00:13:44.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.816 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:44.817 
12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:44.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.817 --rc genhtml_branch_coverage=1 00:13:44.817 --rc genhtml_function_coverage=1 00:13:44.817 --rc genhtml_legend=1 00:13:44.817 --rc geninfo_all_blocks=1 00:13:44.817 --rc geninfo_unexecuted_blocks=1 00:13:44.817 00:13:44.817 ' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:44.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.817 --rc genhtml_branch_coverage=1 00:13:44.817 --rc genhtml_function_coverage=1 00:13:44.817 --rc genhtml_legend=1 00:13:44.817 --rc geninfo_all_blocks=1 00:13:44.817 --rc geninfo_unexecuted_blocks=1 00:13:44.817 00:13:44.817 ' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:44.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.817 --rc genhtml_branch_coverage=1 00:13:44.817 --rc genhtml_function_coverage=1 00:13:44.817 --rc genhtml_legend=1 00:13:44.817 --rc geninfo_all_blocks=1 00:13:44.817 --rc geninfo_unexecuted_blocks=1 00:13:44.817 00:13:44.817 ' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:44.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.817 --rc genhtml_branch_coverage=1 00:13:44.817 --rc genhtml_function_coverage=1 00:13:44.817 --rc genhtml_legend=1 00:13:44.817 --rc geninfo_all_blocks=1 00:13:44.817 --rc geninfo_unexecuted_blocks=1 00:13:44.817 00:13:44.817 ' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.817 12:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.817 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:44.818 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.350 12:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:47.350 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.351 12:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:47.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:47.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:47.351 12:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:47.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:47.351 12:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:47.351 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:47.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:13:47.351 00:13:47.351 --- 10.0.0.2 ping statistics --- 00:13:47.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.351 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:13:47.351 00:13:47.351 --- 10.0.0.1 ping statistics --- 00:13:47.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.351 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:47.351 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=585310 00:13:47.352 12:28:16 
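The network bring-up traced above (`nvmf/common.sh` `nvmf_tcp_init`) can be condensed into the following sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are the values from this run, not fixed constants; the guard makes the script a no-op on a host without that NIC.

```shell
# Sketch of the per-run TCP test network from nvmf/common.sh:nvmf_tcp_init.
# Interface names and addresses are the values seen in this log.
TARGET_IF=cvl_0_0        # moves into a private namespace, serves 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

setup_testnet() {
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"   # target side is now isolated
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port, then prove both directions are reachable
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
}

if command -v ip >/dev/null 2>&1 && ip link show "$TARGET_IF" >/dev/null 2>&1; then
    setup_testnet
else
    echo "skipped: $TARGET_IF not present on this host"
fi
```

Moving only the target interface into the namespace is what lets one machine act as both initiator and target over a real NIC pair.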
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 585310 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 585310 ']' 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:47.352 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.352 [2024-11-05 12:28:16.459122] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:13:47.352 [2024-11-05 12:28:16.459231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.352 [2024-11-05 12:28:16.535755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.352 [2024-11-05 12:28:16.585213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:47.352 [2024-11-05 12:28:16.585278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.352 [2024-11-05 12:28:16.585291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.352 [2024-11-05 12:28:16.585302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.352 [2024-11-05 12:28:16.585312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.352 [2024-11-05 12:28:16.587099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.352 [2024-11-05 12:28:16.587170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.352 [2024-11-05 12:28:16.587173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.352 [2024-11-05 12:28:16.587122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
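The `nvmfappstart`/`waitforlisten` steps traced above amount to: launch `nvmf_tgt` inside the target namespace, then poll for its RPC socket before issuing any `rpc_cmd`. A sketch under this run's paths; the socket path is the `/var/tmp/spdk.sock` default the log shows, and the poll interval/retry count are illustrative assumptions, not the helper's exact values.

```shell
# Sketch: start nvmf_tgt in the target namespace and wait for its RPC socket.
NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
SOCK=/var/tmp/spdk.sock   # default UNIX-domain RPC socket

start_nvmf_tgt() {
    # -i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0xF: cores 0-3
    $NS_EXEC "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the app is up and listening (retry count is illustrative)
    for _ in $(seq 1 100); do
        [ -S "$SOCK" ] && return 0
        sleep 0.1
    done
    echo "nvmf_tgt did not come up" >&2
    return 1
}

if [ -x "$NVMF_TGT" ]; then
    start_nvmf_tgt
else
    echo "skipped: nvmf_tgt binary not present"
fi
```

The `-m 0xF` core mask is why four reactor threads appear in the log, one per core 0-3.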
common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 [2024-11-05 12:28:16.732438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 Null1 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 
12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 [2024-11-05 12:28:16.772740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 Null2 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 
12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 Null3 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.611 Null4 00:13:47.611 
12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.611 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
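The `rpc_cmd` sequence traced above (target/discovery.sh, the `seq 1 4` loop plus the discovery listener and referral) builds four null-bdev-backed subsystems. Condensed into one guarded function; the `rpc.py` path is this workspace's, and every RPC name and argument below is taken directly from the trace:

```shell
# Condensed form of the traced rpc_cmd sequence: a TCP transport, then four
# subsystems, each backed by a null bdev and listening on the target IP,
# plus the discovery listener and a referral on port 4430.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

create_discovery_targets() {
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        "$RPC" bdev_null_create "Null$i" 102400 512   # size/block-size as traced
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
               -a -s "SPDK0000000000000$i"
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
               -t tcp -a 10.0.0.2 -s 4420
    done
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}

if [ -x "$RPC" ]; then
    create_discovery_targets
else
    echo "skipped: rpc.py not present"
fi
```

This is exactly the state the `nvme discover` output below reflects: one discovery entry, four NVMe subsystem entries on port 4420, and one referral entry on 4430.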
common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.870 12:28:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:47.870 00:13:47.870 Discovery Log Number of Records 6, Generation counter 6 00:13:47.870 =====Discovery Log Entry 0====== 00:13:47.870 trtype: tcp 00:13:47.870 adrfam: ipv4 00:13:47.870 subtype: current discovery subsystem 00:13:47.870 treq: not required 00:13:47.870 portid: 0 00:13:47.870 trsvcid: 4420 00:13:47.870 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:47.870 traddr: 10.0.0.2 00:13:47.870 eflags: explicit discovery connections, duplicate discovery information 00:13:47.870 sectype: none 00:13:47.870 =====Discovery Log Entry 1====== 00:13:47.870 trtype: tcp 00:13:47.870 adrfam: ipv4 00:13:47.870 subtype: nvme subsystem 00:13:47.870 treq: not required 00:13:47.870 portid: 0 00:13:47.870 trsvcid: 4420 00:13:47.870 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:47.870 traddr: 10.0.0.2 00:13:47.870 eflags: none 00:13:47.870 sectype: none 00:13:47.870 =====Discovery Log Entry 2====== 00:13:47.870 
trtype: tcp 00:13:47.870 adrfam: ipv4 00:13:47.870 subtype: nvme subsystem 00:13:47.870 treq: not required 00:13:47.870 portid: 0 00:13:47.870 trsvcid: 4420 00:13:47.870 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:47.870 traddr: 10.0.0.2 00:13:47.870 eflags: none 00:13:47.870 sectype: none 00:13:47.870 =====Discovery Log Entry 3====== 00:13:47.870 trtype: tcp 00:13:47.870 adrfam: ipv4 00:13:47.870 subtype: nvme subsystem 00:13:47.870 treq: not required 00:13:47.870 portid: 0 00:13:47.870 trsvcid: 4420 00:13:47.870 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:47.870 traddr: 10.0.0.2 00:13:47.870 eflags: none 00:13:47.870 sectype: none 00:13:47.870 =====Discovery Log Entry 4====== 00:13:47.870 trtype: tcp 00:13:47.870 adrfam: ipv4 00:13:47.870 subtype: nvme subsystem 00:13:47.870 treq: not required 00:13:47.870 portid: 0 00:13:47.870 trsvcid: 4420 00:13:47.870 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:47.870 traddr: 10.0.0.2 00:13:47.870 eflags: none 00:13:47.870 sectype: none 00:13:47.870 =====Discovery Log Entry 5====== 00:13:47.870 trtype: tcp 00:13:47.870 adrfam: ipv4 00:13:47.870 subtype: discovery subsystem referral 00:13:47.870 treq: not required 00:13:47.870 portid: 0 00:13:47.870 trsvcid: 4430 00:13:47.870 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:47.870 traddr: 10.0.0.2 00:13:47.870 eflags: none 00:13:47.870 sectype: none 00:13:47.870 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:47.870 Perform nvmf subsystem discovery via RPC 00:13:47.870 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:47.870 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.870 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.130 [ 00:13:48.130 { 00:13:48.130 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:48.130 "subtype": "Discovery", 00:13:48.130 "listen_addresses": [ 00:13:48.130 { 00:13:48.130 "trtype": "TCP", 00:13:48.130 "adrfam": "IPv4", 00:13:48.130 "traddr": "10.0.0.2", 00:13:48.130 "trsvcid": "4420" 00:13:48.130 } 00:13:48.130 ], 00:13:48.130 "allow_any_host": true, 00:13:48.130 "hosts": [] 00:13:48.130 }, 00:13:48.130 { 00:13:48.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.130 "subtype": "NVMe", 00:13:48.130 "listen_addresses": [ 00:13:48.130 { 00:13:48.130 "trtype": "TCP", 00:13:48.130 "adrfam": "IPv4", 00:13:48.130 "traddr": "10.0.0.2", 00:13:48.130 "trsvcid": "4420" 00:13:48.130 } 00:13:48.130 ], 00:13:48.130 "allow_any_host": true, 00:13:48.130 "hosts": [], 00:13:48.130 "serial_number": "SPDK00000000000001", 00:13:48.130 "model_number": "SPDK bdev Controller", 00:13:48.130 "max_namespaces": 32, 00:13:48.130 "min_cntlid": 1, 00:13:48.130 "max_cntlid": 65519, 00:13:48.130 "namespaces": [ 00:13:48.130 { 00:13:48.130 "nsid": 1, 00:13:48.130 "bdev_name": "Null1", 00:13:48.130 "name": "Null1", 00:13:48.130 "nguid": "32D6B05B91EF4769936452F2CB5BD313", 00:13:48.130 "uuid": "32d6b05b-91ef-4769-9364-52f2cb5bd313" 00:13:48.130 } 00:13:48.130 ] 00:13:48.130 }, 00:13:48.130 { 00:13:48.130 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:48.130 "subtype": "NVMe", 00:13:48.130 "listen_addresses": [ 00:13:48.130 { 00:13:48.130 "trtype": "TCP", 00:13:48.130 "adrfam": "IPv4", 00:13:48.130 "traddr": "10.0.0.2", 00:13:48.130 "trsvcid": "4420" 00:13:48.130 } 00:13:48.130 ], 00:13:48.130 "allow_any_host": true, 00:13:48.130 "hosts": [], 00:13:48.130 "serial_number": "SPDK00000000000002", 00:13:48.130 "model_number": "SPDK bdev Controller", 00:13:48.130 "max_namespaces": 32, 00:13:48.130 "min_cntlid": 1, 00:13:48.130 "max_cntlid": 65519, 00:13:48.130 "namespaces": [ 00:13:48.130 { 00:13:48.130 "nsid": 1, 00:13:48.130 "bdev_name": "Null2", 00:13:48.130 "name": "Null2", 00:13:48.130 "nguid": "83ECD14B47B0417589B60784E88E2B83", 
00:13:48.130 "uuid": "83ecd14b-47b0-4175-89b6-0784e88e2b83" 00:13:48.130 } 00:13:48.130 ] 00:13:48.130 }, 00:13:48.130 { 00:13:48.130 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:48.130 "subtype": "NVMe", 00:13:48.130 "listen_addresses": [ 00:13:48.130 { 00:13:48.130 "trtype": "TCP", 00:13:48.130 "adrfam": "IPv4", 00:13:48.130 "traddr": "10.0.0.2", 00:13:48.130 "trsvcid": "4420" 00:13:48.130 } 00:13:48.130 ], 00:13:48.130 "allow_any_host": true, 00:13:48.130 "hosts": [], 00:13:48.130 "serial_number": "SPDK00000000000003", 00:13:48.130 "model_number": "SPDK bdev Controller", 00:13:48.130 "max_namespaces": 32, 00:13:48.130 "min_cntlid": 1, 00:13:48.130 "max_cntlid": 65519, 00:13:48.130 "namespaces": [ 00:13:48.130 { 00:13:48.130 "nsid": 1, 00:13:48.130 "bdev_name": "Null3", 00:13:48.130 "name": "Null3", 00:13:48.130 "nguid": "0321ABBEA870457BB1080FB1357806CD", 00:13:48.130 "uuid": "0321abbe-a870-457b-b108-0fb1357806cd" 00:13:48.130 } 00:13:48.130 ] 00:13:48.130 }, 00:13:48.130 { 00:13:48.130 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:48.130 "subtype": "NVMe", 00:13:48.131 "listen_addresses": [ 00:13:48.131 { 00:13:48.131 "trtype": "TCP", 00:13:48.131 "adrfam": "IPv4", 00:13:48.131 "traddr": "10.0.0.2", 00:13:48.131 "trsvcid": "4420" 00:13:48.131 } 00:13:48.131 ], 00:13:48.131 "allow_any_host": true, 00:13:48.131 "hosts": [], 00:13:48.131 "serial_number": "SPDK00000000000004", 00:13:48.131 "model_number": "SPDK bdev Controller", 00:13:48.131 "max_namespaces": 32, 00:13:48.131 "min_cntlid": 1, 00:13:48.131 "max_cntlid": 65519, 00:13:48.131 "namespaces": [ 00:13:48.131 { 00:13:48.131 "nsid": 1, 00:13:48.131 "bdev_name": "Null4", 00:13:48.131 "name": "Null4", 00:13:48.131 "nguid": "D27DB30426D14B84AE5B5D05435F2E29", 00:13:48.131 "uuid": "d27db304-26d1-4b84-ae5b-5d05435f2e29" 00:13:48.131 } 00:13:48.131 ] 00:13:48.131 } 00:13:48.131 ] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 
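The `nvmf_get_subsystems` reply above is plain JSON, so it can be summarized with stdlib tooling. The snippet embeds a trimmed two-entry sample mirroring the reply (NQNs and addresses copied from this run) and prints one line per subsystem:

```shell
# Summarize an nvmf_get_subsystems reply: one line per subsystem with its
# listener address. The sample is a trimmed copy of the reply above.
cat > /tmp/subsystems.json <<'EOF'
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}]},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]}
]
EOF
python3 - <<'EOF'
import json
with open("/tmp/subsystems.json") as f:
    for sub in json.load(f):
        addrs = ", ".join(f"{a['traddr']}:{a['trsvcid']}"
                          for a in sub["listen_addresses"])
        print(f"{sub['nqn']} [{sub['subtype']}] {addrs}")
EOF
```

For the sample this prints one `nqn... [Discovery] 10.0.0.2:4420` line and one `...cnode1 [NVMe] 10.0.0.2:4420` line; against a live target the same filter works on `rpc.py nvmf_get_subsystems` output piped in.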
12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.131 rmmod nvme_tcp 00:13:48.131 rmmod nvme_fabrics 00:13:48.131 rmmod nvme_keyring 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 585310 ']' 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 585310 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 585310 ']' 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 585310 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:13:48.131 
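The teardown traced above iterates `seq 1 4`, deleting each subsystem and then its backing null bdev. A minimal sketch of that loop shape — the `rpc.py` invocations are echoed rather than run, since no live SPDK target is assumed here:

```shell
# Sketch of the discovery.sh teardown loop: for each of the four test
# subsystems, remove the NVMe-oF subsystem first, then its null bdev.
for i in $(seq 1 4); do
  echo "rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode${i}"
  echo "rpc.py bdev_null_delete Null${i}"
done
```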
12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 585310 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 585310' 00:13:48.131 killing process with pid 585310 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 585310 00:13:48.131 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 585310 00:13:48.393 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:48.393 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.394 12:28:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.932 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:50.932 00:13:50.932 real 0m5.805s 00:13:50.932 user 0m4.833s 00:13:50.932 sys 0m2.022s 00:13:50.932 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.932 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 ************************************ 00:13:50.932 END TEST nvmf_target_discovery 00:13:50.932 ************************************ 00:13:50.932 12:28:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:50.932 12:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:50.932 12:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.932 12:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.932 ************************************ 00:13:50.932 START TEST nvmf_referrals 00:13:50.932 ************************************ 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:50.933 * Looking for test storage... 
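The `killprocess` trace above runs `kill -0 585310` before sending the real signal. `kill -0` delivers no signal at all; its exit status merely reports whether the PID exists and is signalable, which is how the suite checks the target is still alive. A self-contained sketch using the current shell's own PID rather than a real target PID:

```shell
# kill -0 probes for process existence without signaling it.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is alive"
else
  echo "process $pid is gone"
fi
```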
00:13:50.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:50.933 12:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.933 
--rc genhtml_branch_coverage=1 00:13:50.933 --rc genhtml_function_coverage=1 00:13:50.933 --rc genhtml_legend=1 00:13:50.933 --rc geninfo_all_blocks=1 00:13:50.933 --rc geninfo_unexecuted_blocks=1 00:13:50.933 00:13:50.933 ' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.933 --rc genhtml_branch_coverage=1 00:13:50.933 --rc genhtml_function_coverage=1 00:13:50.933 --rc genhtml_legend=1 00:13:50.933 --rc geninfo_all_blocks=1 00:13:50.933 --rc geninfo_unexecuted_blocks=1 00:13:50.933 00:13:50.933 ' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.933 --rc genhtml_branch_coverage=1 00:13:50.933 --rc genhtml_function_coverage=1 00:13:50.933 --rc genhtml_legend=1 00:13:50.933 --rc geninfo_all_blocks=1 00:13:50.933 --rc geninfo_unexecuted_blocks=1 00:13:50.933 00:13:50.933 ' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:50.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.933 --rc genhtml_branch_coverage=1 00:13:50.933 --rc genhtml_function_coverage=1 00:13:50.933 --rc genhtml_legend=1 00:13:50.933 --rc geninfo_all_blocks=1 00:13:50.933 --rc geninfo_unexecuted_blocks=1 00:13:50.933 00:13:50.933 ' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.933 
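The `lt 1.15 2` call traced above gates the lcov options on the installed lcov version. A simplified sketch of that comparison — the suite's `cmp_versions` walks every dotted component, while this illustrative helper compares only the major component and assumes it is numeric:

```shell
# Compare the major components of two dotted version strings.
# ${1%%.*} strips everything from the first dot onward ("1.15" -> "1").
version_lt() {
  [ "${1%%.*}" -lt "${2%%.*}" ]
}
if version_lt 1.15 2; then
  echo "1.15 < 2"
fi
```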
12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.933 12:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.933 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:50.934 12:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:50.934 12:28:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
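Earlier in the trace, `'[' '' -eq 1 ']'` logged `[: : integer expression expected`: test's `-eq` requires integers on both sides, and an unset or empty variable supplies none. A common guard is a default expansion so the comparison always sees a number — the variable name below is illustrative, not the suite's:

```shell
# An empty value would make [ "$no_huge" -eq 1 ] error out;
# ${no_huge:-0} substitutes 0 so -eq always gets an integer.
no_huge=''
if [ "${no_huge:-0}" -eq 1 ]; then
  echo "huge pages disabled"
else
  echo "huge pages enabled"
fi
```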
nvmf/common.sh@320 -- # e810=() 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:52.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:52.838 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:52.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:52.838 12:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:52.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.838 12:28:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.838 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.838 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.838 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.838 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:13:53.097 00:13:53.097 --- 10.0.0.2 ping statistics --- 00:13:53.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.097 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
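The trace above shows how nvmf_tcp_init builds its test topology on one host: one of the two back-to-back E810 ports (cvl_0_0) is moved into a fresh network namespace as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator side, each gets an address on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, and pings in both directions verify the path. A dry-run sketch of that sequence — the run() wrapper just prints instead of executing (the real commands need root), and the helper names are illustrative, not SPDK's own:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above.
# TGT_IF/INI_IF/NS mirror the values in the log; run() is hypothetical.
set -euo pipefail

TGT_IF=cvl_0_0        # target port, moved into the namespace
INI_IF=cvl_0_1        # initiator port, stays in the root namespace
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }    # print the command rather than executing it

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

Because the target interface lives in its own namespace, the kernel cannot short-circuit 10.0.0.1 -> 10.0.0.2 over loopback, so the pings (and later the NVMe/TCP traffic) actually traverse the cabled ports.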
00:13:53.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:13:53.097 00:13:53.097 --- 10.0.0.1 ping statistics --- 00:13:53.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.097 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=587413 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 587413 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 587413 ']' 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.097 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.097 [2024-11-05 12:28:22.189723] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:13:53.097 [2024-11-05 12:28:22.189814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.097 [2024-11-05 12:28:22.262455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.097 [2024-11-05 12:28:22.309054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.097 [2024-11-05 12:28:22.309104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
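Right after nvmf_tgt is launched inside the namespace, the harness blocks in waitforlisten until the process is up and its RPC socket /var/tmp/spdk.sock is usable (the trace shows max_retries=100). A minimal poll-until-present sketch of that pattern — wait_for_path here is an illustrative stand-in, not the actual autotest_common.sh helper, and the real waitforlisten also probes the socket via rpc.py rather than merely checking that the path exists:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style retry loop: poll for a path until it
# appears or the retry budget is exhausted. Names are illustrative.
set -euo pipefail

wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0    # found it: the server is up
        sleep 0.1                     # back off briefly between polls
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

The nonzero return on timeout matters: the caller runs under errexit, so a target that never opens its socket fails the test immediately instead of hanging.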
00:13:53.097 [2024-11-05 12:28:22.309132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.097 [2024-11-05 12:28:22.309143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.097 [2024-11-05 12:28:22.309153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.097 [2024-11-05 12:28:22.310716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.097 [2024-11-05 12:28:22.310783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.097 [2024-11-05 12:28:22.310868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.097 [2024-11-05 12:28:22.310870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.355 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 [2024-11-05 12:28:22.457180] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 [2024-11-05 12:28:22.469462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:53.356 12:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:53.356 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.614 12:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.614 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.872 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:53.872 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:53.872 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:53.872 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:53.872 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:53.872 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:53.872 12:28:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:53.872 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:54.129 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.130 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:54.388 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:54.388 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:54.388 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:54.388 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:54.388 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.388 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:54.646 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:54.904 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:54.904 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:54.904 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:54.904 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:54.904 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:54.904 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.904 12:28:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:54.904 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:54.904 12:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:54.904 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:54.904 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:54.904 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.904 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:55.161 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:55.162 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:55.420 rmmod nvme_tcp 00:13:55.420 rmmod nvme_fabrics 00:13:55.420 rmmod nvme_keyring 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 587413 ']' 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 587413 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 587413 ']' 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 587413 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 587413 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 587413' 00:13:55.420 killing process with pid 587413 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- 
# kill 587413 00:13:55.420 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 587413 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.678 12:28:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.588 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:57.588 00:13:57.588 real 0m7.179s 00:13:57.588 user 0m11.356s 00:13:57.588 sys 0m2.400s 00:13:57.588 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:57.588 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:57.588 ************************************ 
00:13:57.588 END TEST nvmf_referrals 00:13:57.588 ************************************ 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.848 ************************************ 00:13:57.848 START TEST nvmf_connect_disconnect 00:13:57.848 ************************************ 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:57.848 * Looking for test storage... 
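The nvmf_referrals teardown above runs through the `killprocess` helper from common/autotest_common.sh: it confirms the pid is still alive, kills it, then waits to reap it. A simplified sketch of that pattern (the real helper also checks the process name via `ps -o comm=` and avoids SIGKILL for sudo-owned processes; the demo target here is a background `sleep`):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: verify the pid exists, kill it, reap it.
killprocess_sketch() {
    local pid=$1
    # kill -0 sends no signal; it only tests that the pid exists and is signalable
    kill -0 "$pid" 2>/dev/null || return 1
    echo "killing process with pid $pid"
    kill -9 "$pid"
    # wait reaps the child; a SIGKILL'd child exits with 137, mask that here
    wait "$pid" 2>/dev/null || true
}

sleep 60 &
demo_pid=$!
killprocess_sketch "$demo_pid"
# the pid is gone after wait, so this prints "reaped"
kill -0 "$demo_pid" 2>/dev/null && echo "still running" || echo "reaped"
```

The `kill -0` probe is why the trace shows a separate `kill -0 587413` step before the actual `kill 587413`.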
00:13:57.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:13:57.848 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
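The `lt 1.15 2` check traced above is scripts/common.sh's `cmp_versions`: both version strings are split on `.`, `-` and `:` into the `ver1`/`ver2` arrays, then compared field by field, with missing fields treated as 0. A condensed sketch of that comparison, assuming purely numeric dotted versions:

```shell
#!/usr/bin/env bash
# Sketch of scripts/common.sh cmp_versions for the '<' operator: split on
# '.', '-' and ':', then compare numerically field by field.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing fields with 0
        (( a < b )) && return 0           # first differing field decides
        (( a > b )) && return 1
    done
    return 1                              # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
version_lt 2 1.15 || echo "2 is not < 1.15"
```

Here the detected lcov version (1.15) compares less than 2, so the old-lcov `--rc lcov_branch_coverage=1` option spelling is selected in the following trace lines.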
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:57.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.848 --rc genhtml_branch_coverage=1 00:13:57.848 --rc genhtml_function_coverage=1 00:13:57.848 --rc genhtml_legend=1 00:13:57.848 --rc geninfo_all_blocks=1 00:13:57.848 --rc geninfo_unexecuted_blocks=1 00:13:57.848 00:13:57.848 ' 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:57.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.848 --rc genhtml_branch_coverage=1 00:13:57.848 --rc genhtml_function_coverage=1 00:13:57.848 --rc genhtml_legend=1 00:13:57.848 --rc geninfo_all_blocks=1 00:13:57.848 --rc geninfo_unexecuted_blocks=1 00:13:57.848 00:13:57.848 ' 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:57.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.848 --rc genhtml_branch_coverage=1 00:13:57.848 --rc genhtml_function_coverage=1 00:13:57.848 --rc genhtml_legend=1 00:13:57.848 --rc geninfo_all_blocks=1 00:13:57.848 --rc geninfo_unexecuted_blocks=1 00:13:57.848 00:13:57.848 ' 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:57.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.848 --rc genhtml_branch_coverage=1 00:13:57.848 --rc genhtml_function_coverage=1 00:13:57.848 --rc genhtml_legend=1 00:13:57.848 --rc geninfo_all_blocks=1 00:13:57.848 --rc geninfo_unexecuted_blocks=1 00:13:57.848 00:13:57.848 ' 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.848 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
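The paths/export.sh steps above prepend the go/protoc/golangci directories each time the file is sourced, which is why `PATH` in the trace carries many duplicate entries (harmless, since lookup stops at the first hit). One way to collapse such duplicates while preserving first-occurrence order is sketched below; this dedup step is illustrative and not part of the test scripts:

```shell
#!/usr/bin/env bash
# Sketch: remove duplicate PATH entries, keeping the first occurrence of each
# so lookup order is unchanged.
dedup_path() {
    local entry out='' seen=:
    local IFS=:                      # split the input on ':'
    for entry in $1; do
        [[ $seen == *":$entry:"* ]] && continue
        seen+="$entry:"
        out+="${out:+:}$entry"
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/bin"
```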
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:57.849 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.380 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.380 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:00.380 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.380 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:00.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.381 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:00.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.381 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:00.381 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.381 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:14:00.381 00:14:00.381 --- 10.0.0.2 ping statistics --- 00:14:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.381 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:00.381 00:14:00.381 --- 10.0.0.1 ping statistics --- 00:14:00.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.381 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=589717 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 589717 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 589717 ']' 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:00.381 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.381 [2024-11-05 12:28:29.467347] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:14:00.381 [2024-11-05 12:28:29.467433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.381 [2024-11-05 12:28:29.549360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.381 [2024-11-05 12:28:29.598526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:00.381 [2024-11-05 12:28:29.598584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.381 [2024-11-05 12:28:29.598612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.381 [2024-11-05 12:28:29.598623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.381 [2024-11-05 12:28:29.598633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.381 [2024-11-05 12:28:29.600133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.381 [2024-11-05 12:28:29.600174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.381 [2024-11-05 12:28:29.600235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.381 [2024-11-05 12:28:29.600238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:00.639 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.639 [2024-11-05 12:28:29.745407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.639 12:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.639 [2024-11-05 12:28:29.819308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:00.639 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:03.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.587 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.597 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.309 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.146 [2024-11-05 12:31:13.971668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d8eb0 is same with the state(6) to be set 00:16:45.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.989 [2024-11-05 12:31:23.033598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d69e0 is same with the state(6) to be set 00:16:53.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:14.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:17:17.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:21.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.647 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:51.647 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:51.647 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.647 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:51.647 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.647 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:51.647 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 
00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.648 rmmod nvme_tcp 00:17:51.648 rmmod nvme_fabrics 00:17:51.648 rmmod nvme_keyring 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 589717 ']' 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 589717 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 589717 ']' 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 589717 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 589717 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 589717' 00:17:51.648 killing process with pid 589717 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@971 -- # kill 589717 00:17:51.648 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 589717 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.906 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.813 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:53.813 00:17:53.813 real 3m56.154s 00:17:53.813 user 14m58.505s 00:17:53.813 sys 0m35.845s 00:17:53.813 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:17:53.813 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:53.813 ************************************ 00:17:53.813 END TEST nvmf_connect_disconnect 00:17:53.813 ************************************ 00:17:53.813 12:32:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:53.813 12:32:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:53.813 12:32:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:53.813 12:32:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.072 ************************************ 00:17:54.072 START TEST nvmf_multitarget 00:17:54.072 ************************************ 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:54.072 * Looking for test storage... 
00:17:54.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:54.072 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.072 --rc genhtml_branch_coverage=1 00:17:54.072 --rc genhtml_function_coverage=1 00:17:54.072 --rc genhtml_legend=1 00:17:54.072 --rc geninfo_all_blocks=1 00:17:54.072 --rc geninfo_unexecuted_blocks=1 00:17:54.072 00:17:54.072 ' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:54.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.072 --rc genhtml_branch_coverage=1 00:17:54.072 --rc genhtml_function_coverage=1 00:17:54.072 --rc genhtml_legend=1 00:17:54.072 --rc geninfo_all_blocks=1 00:17:54.072 --rc geninfo_unexecuted_blocks=1 00:17:54.072 00:17:54.072 ' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:54.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.072 --rc genhtml_branch_coverage=1 00:17:54.072 --rc genhtml_function_coverage=1 00:17:54.072 --rc genhtml_legend=1 00:17:54.072 --rc geninfo_all_blocks=1 00:17:54.072 --rc geninfo_unexecuted_blocks=1 00:17:54.072 00:17:54.072 ' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:54.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.072 --rc genhtml_branch_coverage=1 00:17:54.072 --rc genhtml_function_coverage=1 00:17:54.072 --rc genhtml_legend=1 00:17:54.072 --rc geninfo_all_blocks=1 00:17:54.072 --rc geninfo_unexecuted_blocks=1 00:17:54.072 00:17:54.072 ' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.072 12:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.072 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.073 12:32:23 
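The "`[: : integer expression expected`" message above comes from common.sh line 33 passing an empty string to an arithmetic test (`'[' '' -eq 1 ']'`). A minimal reproduction and a defensive rewrite (variable name is illustrative, not from the script):

```shell
# Reproduce the failure mode: an unset/empty value in a numeric test.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then  # would print "[: : integer expression expected" without the redirect
    echo "enabled"
fi

# Supplying a default via parameter expansion avoids the error entirely:
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

With `flag` empty, the guarded form falls through to the else branch instead of erroring.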
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:54.073 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:56.605 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:56.605 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:56.605 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:56.605 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.605 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:56.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:56.605 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.606 
12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:56.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.606 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:56.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:17:56.606 00:17:56.606 --- 10.0.0.2 ping statistics --- 00:17:56.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.606 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:56.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:17:56.606 00:17:56.606 --- 10.0.0.1 ping statistics --- 00:17:56.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.606 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=620735 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
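The `nvmf_tcp_init` sequence above splits the two ice ports across namespaces: `cvl_0_0` (target side) is moved into its own network namespace while `cvl_0_1` (initiator side) stays in the root namespace, and the successful pings in both directions confirm the topology. A condensed sketch of the same setup, with interface names and addresses taken from the log (requires root; not runnable outside the test rig):

```shell
# Target interface lives in its own namespace; initiator stays in the root ns.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside ns

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Reachability check in both directions, as in the log:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Every subsequent target-side command in the log (including launching `nvmf_tgt`) is then wrapped in `ip netns exec cvl_0_0_ns_spdk`.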
waitforlisten 620735 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 620735 ']' 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:56.606 [2024-11-05 12:32:25.560051] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:17:56.606 [2024-11-05 12:32:25.560135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.606 [2024-11-05 12:32:25.642618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.606 [2024-11-05 12:32:25.692367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.606 [2024-11-05 12:32:25.692438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:56.606 [2024-11-05 12:32:25.692451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.606 [2024-11-05 12:32:25.692462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.606 [2024-11-05 12:32:25.692471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.606 [2024-11-05 12:32:25.694116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.606 [2024-11-05 12:32:25.694144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.606 [2024-11-05 12:32:25.694199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.606 [2024-11-05 12:32:25.694202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:56.606 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:56.606 12:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:56.903 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:56.903 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:56.903 "nvmf_tgt_1" 00:17:56.903 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:57.184 "nvmf_tgt_2" 00:17:57.184 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:57.184 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:57.184 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:57.185 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:57.442 true 00:17:57.442 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:57.442 true 00:17:57.442 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:57.442 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:57.699 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.700 rmmod nvme_tcp 00:17:57.700 rmmod nvme_fabrics 00:17:57.700 rmmod nvme_keyring 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 620735 ']' 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 620735 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 620735 ']' 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 620735 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 620735 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 620735' 00:17:57.700 killing process with pid 620735 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 620735 00:17:57.700 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 620735 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
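The teardown above (`iptr`, i.e. `iptables-save | grep -v SPDK_NVMF | iptables-restore`) removes only the firewall rules that were inserted earlier with an `SPDK_NVMF` comment tag, leaving unrelated rules intact. The filtering step can be seen in isolation on a saved-rules snippet (rule text is illustrative):

```shell
# Two saved rules: one tagged by the test harness, one unrelated.
rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:x"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Dropping tagged lines is all the cleanup needs; iptables-restore
# then reloads whatever survives the filter.
printf '%s\n' "$rules" | grep -v SPDK_NVMF
```

Only the untagged SSH rule survives the filter, which is why tagging rules at insert time (via `-m comment --comment`) makes the later blanket cleanup safe.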
xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.959 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.863 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:59.863 00:17:59.863 real 0m5.992s 00:17:59.863 user 0m7.020s 00:17:59.863 sys 0m2.064s 00:17:59.863 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:59.863 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:59.863 ************************************ 00:17:59.863 END TEST nvmf_multitarget 00:17:59.863 ************************************ 00:17:59.863 12:32:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:59.863 12:32:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:59.863 12:32:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:59.863 12:32:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.122 ************************************ 00:18:00.122 START TEST nvmf_rpc 00:18:00.122 ************************************ 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:00.122 * Looking for test storage... 
00:18:00.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.122 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.122 --rc genhtml_branch_coverage=1 00:18:00.122 --rc genhtml_function_coverage=1 00:18:00.122 --rc genhtml_legend=1 00:18:00.122 --rc geninfo_all_blocks=1 00:18:00.122 --rc geninfo_unexecuted_blocks=1 
00:18:00.122 00:18:00.122 ' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.122 --rc genhtml_branch_coverage=1 00:18:00.122 --rc genhtml_function_coverage=1 00:18:00.122 --rc genhtml_legend=1 00:18:00.122 --rc geninfo_all_blocks=1 00:18:00.122 --rc geninfo_unexecuted_blocks=1 00:18:00.122 00:18:00.122 ' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.122 --rc genhtml_branch_coverage=1 00:18:00.122 --rc genhtml_function_coverage=1 00:18:00.122 --rc genhtml_legend=1 00:18:00.122 --rc geninfo_all_blocks=1 00:18:00.122 --rc geninfo_unexecuted_blocks=1 00:18:00.122 00:18:00.122 ' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:00.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.122 --rc genhtml_branch_coverage=1 00:18:00.122 --rc genhtml_function_coverage=1 00:18:00.122 --rc genhtml_legend=1 00:18:00.122 --rc geninfo_all_blocks=1 00:18:00.122 --rc geninfo_unexecuted_blocks=1 00:18:00.122 00:18:00.122 ' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.122 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.122 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:00.123 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:00.123 12:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.658 
12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:18:02.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:02.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:02.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:02.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.658 12:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.658 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:02.659 
12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:02.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:18:02.659 00:18:02.659 --- 10.0.0.2 ping statistics --- 00:18:02.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.659 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:18:02.659 00:18:02.659 --- 10.0.0.1 ping statistics --- 00:18:02.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.659 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=622971 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:02.659 
12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 622971 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 622971 ']' 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.659 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.659 [2024-11-05 12:32:31.698188] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:18:02.659 [2024-11-05 12:32:31.698299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.659 [2024-11-05 12:32:31.778202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.659 [2024-11-05 12:32:31.824676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.659 [2024-11-05 12:32:31.824734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.659 [2024-11-05 12:32:31.824748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.659 [2024-11-05 12:32:31.824759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:02.659 [2024-11-05 12:32:31.824768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.659 [2024-11-05 12:32:31.826379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.659 [2024-11-05 12:32:31.826403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.659 [2024-11-05 12:32:31.826461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.659 [2024-11-05 12:32:31.826463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.917 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:02.917 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:18:02.917 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.917 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.917 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.917 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.917 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:02.918 "tick_rate": 2700000000, 00:18:02.918 "poll_groups": [ 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_000", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 
"current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [] 00:18:02.918 }, 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_001", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 "current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [] 00:18:02.918 }, 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_002", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 "current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [] 00:18:02.918 }, 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_003", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 "current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [] 00:18:02.918 } 00:18:02.918 ] 00:18:02.918 }' 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:02.918 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.918 [2024-11-05 12:32:32.047747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:02.918 "tick_rate": 2700000000, 00:18:02.918 "poll_groups": [ 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_000", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 "current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [ 00:18:02.918 { 00:18:02.918 "trtype": "TCP" 00:18:02.918 } 00:18:02.918 ] 00:18:02.918 }, 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_001", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 "current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [ 00:18:02.918 { 00:18:02.918 "trtype": "TCP" 00:18:02.918 } 00:18:02.918 ] 00:18:02.918 }, 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_002", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 
"current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [ 00:18:02.918 { 00:18:02.918 "trtype": "TCP" 00:18:02.918 } 00:18:02.918 ] 00:18:02.918 }, 00:18:02.918 { 00:18:02.918 "name": "nvmf_tgt_poll_group_003", 00:18:02.918 "admin_qpairs": 0, 00:18:02.918 "io_qpairs": 0, 00:18:02.918 "current_admin_qpairs": 0, 00:18:02.918 "current_io_qpairs": 0, 00:18:02.918 "pending_bdev_io": 0, 00:18:02.918 "completed_nvme_io": 0, 00:18:02.918 "transports": [ 00:18:02.918 { 00:18:02.918 "trtype": "TCP" 00:18:02.918 } 00:18:02.918 ] 00:18:02.918 } 00:18:02.918 ] 00:18:02.918 }' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.918 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.176 Malloc1 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.176 [2024-11-05 12:32:32.197205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.176 
12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:03.176 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:18:03.177 [2024-11-05 12:32:32.219682] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:18:03.177 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:03.177 could not add new controller: failed to write to nvme-fabrics device 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.177 12:32:32 
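The rejected `nvme connect` above ("Subsystem ... does not allow host ...") demonstrates the target's per-subsystem host access check: a connection is admitted only if any-host access is enabled or the host NQN was whitelisted with `nvmf_subsystem_add_host`. A toy model of that decision follows; the class and method names are illustrative, not SPDK's internals:

```python
class Subsystem:
    """Toy model of per-subsystem host access control (illustrative only)."""

    def __init__(self, nqn):
        self.nqn = nqn
        self.allow_any_host = False   # toggled by nvmf_subsystem_allow_any_host
        self.allowed_hosts = set()    # populated by nvmf_subsystem_add_host

    def add_host(self, host_nqn):
        self.allowed_hosts.add(host_nqn)

    def access_allowed(self, host_nqn):
        # Mirrors the outcome logged by nvmf_qpair_access_allowed: the
        # connect is rejected until the host is whitelisted or any-host
        # access is enabled.
        return self.allow_any_host or host_nqn in self.allowed_hosts

subsys = Subsystem("nqn.2016-06.io.spdk:cnode1")
host = "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"

print(subsys.access_allowed(host))  # False -> "does not allow host" error
subsys.add_host(host)
print(subsys.access_allowed(host))  # True -> connect succeeds
```

The same flip happens later in the log via `nvmf_subsystem_allow_any_host -e`, which corresponds to setting `allow_any_host = True` here.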
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.177 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.742 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.742 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:03.742 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.742 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:03.742 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:05.642 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:05.642 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:05.642 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.642 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:05.642 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.642 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:05.643 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.901 12:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:18:05.901 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:05.901 [2024-11-05 12:32:34.993979] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:18:05.901 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:05.901 could not add new controller: failed to write to nvme-fabrics device 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.901 12:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.901 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:06.834 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:06.834 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:06.834 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.834 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:06.834 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( 
nvme_devices == nvme_device_counter )) 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
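The `waitforserial` helper seen throughout this run polls `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` up to 15 times with a 2-second sleep until the expected device count appears. The retry shape can be sketched generically; the probe callable here is a stand-in for that `lsblk | grep -c` pipeline, and the defaults copy the loop bounds from the log:

```python
import time

def wait_for(probe, expected=1, attempts=15, delay=2.0):
    """Retry probe() until it returns `expected`, mirroring waitforserial:
    (( i++ <= 15 )) with `sleep 2` between attempts. Returns True on
    success, False if the budget is exhausted."""
    for _ in range(attempts):
        if probe() == expected:
            return True
        time.sleep(delay)
    return False

# Stand-in probe: a namespace that "appears" on the third poll
# (delay=0.0 only to keep this sketch fast).
polls = iter([0, 0, 1])
print(wait_for(lambda: next(polls), expected=1, delay=0.0))  # True
```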
SPDKISFASTANDAWESOME 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.732 [2024-11-05 12:32:37.823077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.732 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:09.297 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:09.297 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:09.297 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.297 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:09.297 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:11.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.823 12:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.823 [2024-11-05 12:32:40.556127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.823 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:12.082 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:12.082 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:12.082 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.082 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:12.082 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.609 [2024-11-05 12:32:43.378325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.609 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:14.867 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:14.867 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:14.867 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:18:14.867 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:14.867 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:16.764 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:16.764 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:16.764 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.764 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:16.764 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.764 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:16.764 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.022 [2024-11-05 12:32:46.099756] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.022 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.589 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:17.589 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:17.589 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.589 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:17.589 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.116 [2024-11-05 12:32:48.922051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.116 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.116 12:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.117 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:20.374 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:20.374 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:18:20.374 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.374 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:18:20.374 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l 
-o NAME,SERIAL 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 [2024-11-05 12:32:51.693431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 [2024-11-05 12:32:51.741457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.914 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.915 
12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 [2024-11-05 12:32:51.789620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:22.915 
12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 [2024-11-05 12:32:51.837800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 [2024-11-05 
12:32:51.885981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 
12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.915 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:22.915 "tick_rate": 2700000000, 00:18:22.915 "poll_groups": [ 00:18:22.915 { 00:18:22.915 "name": "nvmf_tgt_poll_group_000", 00:18:22.915 "admin_qpairs": 2, 00:18:22.915 "io_qpairs": 84, 00:18:22.915 "current_admin_qpairs": 0, 00:18:22.915 "current_io_qpairs": 0, 00:18:22.915 "pending_bdev_io": 0, 00:18:22.915 "completed_nvme_io": 158, 00:18:22.915 "transports": [ 00:18:22.915 { 00:18:22.915 "trtype": "TCP" 00:18:22.915 } 00:18:22.915 ] 00:18:22.915 }, 00:18:22.915 { 00:18:22.915 "name": "nvmf_tgt_poll_group_001", 00:18:22.915 "admin_qpairs": 2, 00:18:22.915 "io_qpairs": 84, 00:18:22.915 "current_admin_qpairs": 0, 00:18:22.915 "current_io_qpairs": 0, 00:18:22.915 "pending_bdev_io": 0, 00:18:22.915 "completed_nvme_io": 90, 00:18:22.915 "transports": [ 00:18:22.916 { 00:18:22.916 "trtype": "TCP" 00:18:22.916 } 00:18:22.916 ] 00:18:22.916 }, 00:18:22.916 { 00:18:22.916 "name": "nvmf_tgt_poll_group_002", 00:18:22.916 "admin_qpairs": 1, 00:18:22.916 "io_qpairs": 84, 00:18:22.916 "current_admin_qpairs": 0, 00:18:22.916 "current_io_qpairs": 0, 00:18:22.916 "pending_bdev_io": 0, 00:18:22.916 "completed_nvme_io": 230, 00:18:22.916 "transports": [ 00:18:22.916 { 00:18:22.916 "trtype": "TCP" 00:18:22.916 } 00:18:22.916 ] 00:18:22.916 }, 00:18:22.916 { 00:18:22.916 "name": "nvmf_tgt_poll_group_003", 00:18:22.916 "admin_qpairs": 2, 00:18:22.916 "io_qpairs": 84, 
00:18:22.916 "current_admin_qpairs": 0, 00:18:22.916 "current_io_qpairs": 0, 00:18:22.916 "pending_bdev_io": 0, 00:18:22.916 "completed_nvme_io": 208, 00:18:22.916 "transports": [ 00:18:22.916 { 00:18:22.916 "trtype": "TCP" 00:18:22.916 } 00:18:22.916 ] 00:18:22.916 } 00:18:22.916 ] 00:18:22.916 }' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:22.916 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:22.916 rmmod nvme_tcp 00:18:22.916 rmmod nvme_fabrics 00:18:22.916 rmmod nvme_keyring 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 622971 ']' 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 622971 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 622971 ']' 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 622971 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 622971 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 622971' 00:18:22.916 killing process with pid 622971 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@971 -- # kill 622971 00:18:22.916 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 622971 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.175 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:25.714 00:18:25.714 real 0m25.293s 00:18:25.714 user 1m21.646s 00:18:25.714 sys 0m4.187s 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.714 ************************************ 00:18:25.714 END TEST nvmf_rpc 00:18:25.714 
************************************ 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.714 ************************************ 00:18:25.714 START TEST nvmf_invalid 00:18:25.714 ************************************ 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:25.714 * Looking for test storage... 00:18:25.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.714 --rc genhtml_branch_coverage=1 00:18:25.714 --rc genhtml_function_coverage=1 00:18:25.714 --rc genhtml_legend=1 00:18:25.714 --rc geninfo_all_blocks=1 00:18:25.714 --rc geninfo_unexecuted_blocks=1 00:18:25.714 00:18:25.714 ' 
00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.714 --rc genhtml_branch_coverage=1 00:18:25.714 --rc genhtml_function_coverage=1 00:18:25.714 --rc genhtml_legend=1 00:18:25.714 --rc geninfo_all_blocks=1 00:18:25.714 --rc geninfo_unexecuted_blocks=1 00:18:25.714 00:18:25.714 ' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.714 --rc genhtml_branch_coverage=1 00:18:25.714 --rc genhtml_function_coverage=1 00:18:25.714 --rc genhtml_legend=1 00:18:25.714 --rc geninfo_all_blocks=1 00:18:25.714 --rc geninfo_unexecuted_blocks=1 00:18:25.714 00:18:25.714 ' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:25.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.714 --rc genhtml_branch_coverage=1 00:18:25.714 --rc genhtml_function_coverage=1 00:18:25.714 --rc genhtml_legend=1 00:18:25.714 --rc geninfo_all_blocks=1 00:18:25.714 --rc geninfo_unexecuted_blocks=1 00:18:25.714 00:18:25.714 ' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.714 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.714 
12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.714 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.715 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:25.715 12:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:25.715 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:27.620 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.620 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.620 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.620 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.620 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.620 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:27.620 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.621 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.879 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:27.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:18:27.879 00:18:27.879 --- 10.0.0.2 ping statistics --- 00:18:27.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.879 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:18:27.879 00:18:27.879 --- 10.0.0.1 ping statistics --- 00:18:27.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.879 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:27.879 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=627460 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 627460 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 627460 ']' 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:27.879 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:27.879 [2024-11-05 12:32:57.045626] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:18:27.879 [2024-11-05 12:32:57.045719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.879 [2024-11-05 12:32:57.119430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.138 [2024-11-05 12:32:57.165999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.138 [2024-11-05 12:32:57.166054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.138 [2024-11-05 12:32:57.166076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.138 [2024-11-05 12:32:57.166087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.138 [2024-11-05 12:32:57.166098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
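[Editor's note] With `nvmf_tgt` up (pid 627460) and listening on `/var/tmp/spdk.sock`, every `target/invalid.sh` step that follows goes through `scripts/rpc.py`, which speaks JSON-RPC 2.0 over that UNIX socket. A minimal sketch of the request shape for the first negative test (socket transport and framing omitted; only the payload structure is shown, taken from the request/response pairs echoed in this log):

```python
import json

def rpc_request(method, params, req_id=1):
    # Shape of the JSON-RPC 2.0 payloads rpc.py writes to /var/tmp/spdk.sock.
    # Transport framing is simplified away; this only models the body.
    return {"jsonrpc": "2.0", "method": method, "id": req_id, "params": params}

# First negative test in the log: create a subsystem on a nonexistent target.
req = rpc_request("nvmf_create_subsystem",
                  {"nqn": "nqn.2016-06.io.spdk:cnode14951", "tgt_name": "foobar"})
wire = json.dumps(req)
# The log shows the target answering this with error -32603,
# "Unable to find target foobar".
```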
00:18:28.138 [2024-11-05 12:32:57.167697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.138 [2024-11-05 12:32:57.167751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.138 [2024-11-05 12:32:57.167795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.138 [2024-11-05 12:32:57.167797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:28.138 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode14951 00:18:28.396 [2024-11-05 12:32:57.576727] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:28.396 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:28.396 { 00:18:28.396 "nqn": "nqn.2016-06.io.spdk:cnode14951", 00:18:28.396 "tgt_name": "foobar", 00:18:28.396 "method": "nvmf_create_subsystem", 00:18:28.396 "req_id": 1 00:18:28.396 } 00:18:28.396 Got JSON-RPC error 
response 00:18:28.396 response: 00:18:28.396 { 00:18:28.396 "code": -32603, 00:18:28.396 "message": "Unable to find target foobar" 00:18:28.396 }' 00:18:28.396 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:28.396 { 00:18:28.396 "nqn": "nqn.2016-06.io.spdk:cnode14951", 00:18:28.396 "tgt_name": "foobar", 00:18:28.396 "method": "nvmf_create_subsystem", 00:18:28.396 "req_id": 1 00:18:28.396 } 00:18:28.396 Got JSON-RPC error response 00:18:28.396 response: 00:18:28.396 { 00:18:28.396 "code": -32603, 00:18:28.396 "message": "Unable to find target foobar" 00:18:28.396 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:28.396 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:28.396 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16096 00:18:28.653 [2024-11-05 12:32:57.853706] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16096: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:28.653 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:28.653 { 00:18:28.653 "nqn": "nqn.2016-06.io.spdk:cnode16096", 00:18:28.653 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:28.653 "method": "nvmf_create_subsystem", 00:18:28.654 "req_id": 1 00:18:28.654 } 00:18:28.654 Got JSON-RPC error response 00:18:28.654 response: 00:18:28.654 { 00:18:28.654 "code": -32602, 00:18:28.654 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:28.654 }' 00:18:28.654 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:28.654 { 00:18:28.654 "nqn": "nqn.2016-06.io.spdk:cnode16096", 00:18:28.654 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:28.654 "method": "nvmf_create_subsystem", 
00:18:28.654 "req_id": 1 00:18:28.654 } 00:18:28.654 Got JSON-RPC error response 00:18:28.654 response: 00:18:28.654 { 00:18:28.654 "code": -32602, 00:18:28.654 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:28.654 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:28.654 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:28.654 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16586 00:18:28.911 [2024-11-05 12:32:58.142656] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16586: invalid model number 'SPDK_Controller' 00:18:29.169 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:29.169 { 00:18:29.169 "nqn": "nqn.2016-06.io.spdk:cnode16586", 00:18:29.169 "model_number": "SPDK_Controller\u001f", 00:18:29.169 "method": "nvmf_create_subsystem", 00:18:29.169 "req_id": 1 00:18:29.169 } 00:18:29.169 Got JSON-RPC error response 00:18:29.169 response: 00:18:29.169 { 00:18:29.169 "code": -32602, 00:18:29.170 "message": "Invalid MN SPDK_Controller\u001f" 00:18:29.170 }' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:29.170 { 00:18:29.170 "nqn": "nqn.2016-06.io.spdk:cnode16586", 00:18:29.170 "model_number": "SPDK_Controller\u001f", 00:18:29.170 "method": "nvmf_create_subsystem", 00:18:29.170 "req_id": 1 00:18:29.170 } 00:18:29.170 Got JSON-RPC error response 00:18:29.170 response: 00:18:29.170 { 00:18:29.170 "code": -32602, 00:18:29.170 "message": "Invalid MN SPDK_Controller\u001f" 00:18:29.170 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 
12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:29.170 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:29.170 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:29.170 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:29.170 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.171 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ';}l4F-E*)bp2qtU~^pJ,Q' 00:18:29.171 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';}l4F-E*)bp2qtU~^pJ,Q' nqn.2016-06.io.spdk:cnode18425 00:18:29.429 [2024-11-05 12:32:58.556036] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18425: invalid serial number ';}l4F-E*)bp2qtU~^pJ,Q' 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:29.429 { 00:18:29.429 "nqn": "nqn.2016-06.io.spdk:cnode18425", 00:18:29.429 "serial_number": ";}l4F-E*)bp2qtU~^pJ,Q", 00:18:29.429 "method": "nvmf_create_subsystem", 00:18:29.429 "req_id": 1 00:18:29.429 } 00:18:29.429 Got JSON-RPC error response 00:18:29.429 response: 00:18:29.429 { 00:18:29.429 "code": -32602, 00:18:29.429 "message": "Invalid SN ;}l4F-E*)bp2qtU~^pJ,Q" 00:18:29.429 }' 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:29.429 { 00:18:29.429 "nqn": "nqn.2016-06.io.spdk:cnode18425", 00:18:29.429 "serial_number": ";}l4F-E*)bp2qtU~^pJ,Q", 00:18:29.429 "method": "nvmf_create_subsystem", 00:18:29.429 "req_id": 1 00:18:29.429 } 00:18:29.429 Got JSON-RPC error response 00:18:29.429 response: 00:18:29.429 { 00:18:29.429 "code": -32602, 00:18:29.429 "message": "Invalid SN ;}l4F-E*)bp2qtU~^pJ,Q" 00:18:29.429 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:29.429 12:32:58 
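[Editor's note] The long character-by-character trace above is `target/invalid.sh`'s `gen_random_s` building a 21-character serial number, one `printf %x` / `string+=` pair per character, drawn from ASCII codes 32 through 127. In Python the same generator is roughly:

```python
import random

def gen_random_s(length):
    # Mirror of gen_random_s in target/invalid.sh: each character is drawn
    # uniformly from ASCII 32..127 (the chars array in the trace), then
    # concatenated until `length` characters have been produced.
    return "".join(chr(random.randrange(32, 128)) for _ in range(length))
```

The 21-character draw shown in the log, `;}l4F-E*)bp2qtU~^pJ,Q`, is then passed to `nvmf_create_subsystem` and rejected with "Invalid SN", presumably because it exceeds the 20-byte NVMe serial-number field; the 41-character run that follows exercises the model-number limit the same way.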
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.429 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:29.429 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:29.430 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:29.430 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:29.430 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 
00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.430 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:29.688 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:29.689 
12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:29.689 12:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'fju7dAQZKQib%E%4$x.}i5VK,hyta#z]h+rNCK[6' 00:18:29.689 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'fju7dAQZKQib%E%4$x.}i5VK,hyta#z]h+rNCK[6' nqn.2016-06.io.spdk:cnode2825 00:18:29.948 [2024-11-05 12:32:58.969378] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2825: invalid model number 'fju7dAQZKQib%E%4$x.}i5VK,hyta#z]h+rNCK[6' 00:18:29.948 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@58 -- # out='request: 00:18:29.948 { 00:18:29.948 "nqn": "nqn.2016-06.io.spdk:cnode2825", 00:18:29.948 "model_number": "fju7dAQZKQib\u007f%E%4$x.}i5VK,hyta#z]h+rNCK[6", 00:18:29.948 "method": "nvmf_create_subsystem", 00:18:29.948 "req_id": 1 00:18:29.948 } 00:18:29.948 Got JSON-RPC error response 00:18:29.948 response: 00:18:29.948 { 00:18:29.948 "code": -32602, 00:18:29.948 "message": "Invalid MN fju7dAQZKQib\u007f%E%4$x.}i5VK,hyta#z]h+rNCK[6" 00:18:29.948 }' 00:18:29.948 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:29.948 { 00:18:29.948 "nqn": "nqn.2016-06.io.spdk:cnode2825", 00:18:29.948 "model_number": "fju7dAQZKQib\u007f%E%4$x.}i5VK,hyta#z]h+rNCK[6", 00:18:29.948 "method": "nvmf_create_subsystem", 00:18:29.948 "req_id": 1 00:18:29.948 } 00:18:29.948 Got JSON-RPC error response 00:18:29.948 response: 00:18:29.948 { 00:18:29.948 "code": -32602, 00:18:29.948 "message": "Invalid MN fju7dAQZKQib\u007f%E%4$x.}i5VK,hyta#z]h+rNCK[6" 00:18:29.948 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:29.948 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:30.206 [2024-11-05 12:32:59.246367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.206 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:30.463 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:30.463 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:30.463 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:30.463 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:30.463 
12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:30.720 [2024-11-05 12:32:59.804166] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:30.720 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:30.720 { 00:18:30.720 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:30.720 "listen_address": { 00:18:30.720 "trtype": "tcp", 00:18:30.720 "traddr": "", 00:18:30.720 "trsvcid": "4421" 00:18:30.720 }, 00:18:30.720 "method": "nvmf_subsystem_remove_listener", 00:18:30.720 "req_id": 1 00:18:30.720 } 00:18:30.720 Got JSON-RPC error response 00:18:30.720 response: 00:18:30.720 { 00:18:30.720 "code": -32602, 00:18:30.720 "message": "Invalid parameters" 00:18:30.720 }' 00:18:30.720 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:30.720 { 00:18:30.720 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:30.720 "listen_address": { 00:18:30.720 "trtype": "tcp", 00:18:30.720 "traddr": "", 00:18:30.720 "trsvcid": "4421" 00:18:30.720 }, 00:18:30.720 "method": "nvmf_subsystem_remove_listener", 00:18:30.720 "req_id": 1 00:18:30.720 } 00:18:30.720 Got JSON-RPC error response 00:18:30.720 response: 00:18:30.720 { 00:18:30.720 "code": -32602, 00:18:30.720 "message": "Invalid parameters" 00:18:30.720 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:30.720 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27058 -i 0 00:18:30.977 [2024-11-05 12:33:00.081132] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27058: invalid cntlid range [0-65519] 00:18:30.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@73 -- # out='request: 00:18:30.978 { 00:18:30.978 "nqn": "nqn.2016-06.io.spdk:cnode27058", 00:18:30.978 "min_cntlid": 0, 00:18:30.978 "method": "nvmf_create_subsystem", 00:18:30.978 "req_id": 1 00:18:30.978 } 00:18:30.978 Got JSON-RPC error response 00:18:30.978 response: 00:18:30.978 { 00:18:30.978 "code": -32602, 00:18:30.978 "message": "Invalid cntlid range [0-65519]" 00:18:30.978 }' 00:18:30.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:30.978 { 00:18:30.978 "nqn": "nqn.2016-06.io.spdk:cnode27058", 00:18:30.978 "min_cntlid": 0, 00:18:30.978 "method": "nvmf_create_subsystem", 00:18:30.978 "req_id": 1 00:18:30.978 } 00:18:30.978 Got JSON-RPC error response 00:18:30.978 response: 00:18:30.978 { 00:18:30.978 "code": -32602, 00:18:30.978 "message": "Invalid cntlid range [0-65519]" 00:18:30.978 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:30.978 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14484 -i 65520 00:18:31.237 [2024-11-05 12:33:00.382161] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14484: invalid cntlid range [65520-65519] 00:18:31.238 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:31.238 { 00:18:31.238 "nqn": "nqn.2016-06.io.spdk:cnode14484", 00:18:31.238 "min_cntlid": 65520, 00:18:31.238 "method": "nvmf_create_subsystem", 00:18:31.238 "req_id": 1 00:18:31.238 } 00:18:31.238 Got JSON-RPC error response 00:18:31.238 response: 00:18:31.238 { 00:18:31.238 "code": -32602, 00:18:31.238 "message": "Invalid cntlid range [65520-65519]" 00:18:31.238 }' 00:18:31.238 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:31.238 { 00:18:31.238 "nqn": "nqn.2016-06.io.spdk:cnode14484", 00:18:31.238 "min_cntlid": 
65520, 00:18:31.238 "method": "nvmf_create_subsystem", 00:18:31.238 "req_id": 1 00:18:31.238 } 00:18:31.238 Got JSON-RPC error response 00:18:31.238 response: 00:18:31.238 { 00:18:31.238 "code": -32602, 00:18:31.238 "message": "Invalid cntlid range [65520-65519]" 00:18:31.238 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.238 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8050 -I 0 00:18:31.495 [2024-11-05 12:33:00.671156] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8050: invalid cntlid range [1-0] 00:18:31.495 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:31.495 { 00:18:31.495 "nqn": "nqn.2016-06.io.spdk:cnode8050", 00:18:31.495 "max_cntlid": 0, 00:18:31.495 "method": "nvmf_create_subsystem", 00:18:31.495 "req_id": 1 00:18:31.495 } 00:18:31.495 Got JSON-RPC error response 00:18:31.495 response: 00:18:31.495 { 00:18:31.495 "code": -32602, 00:18:31.495 "message": "Invalid cntlid range [1-0]" 00:18:31.495 }' 00:18:31.495 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:31.495 { 00:18:31.495 "nqn": "nqn.2016-06.io.spdk:cnode8050", 00:18:31.495 "max_cntlid": 0, 00:18:31.495 "method": "nvmf_create_subsystem", 00:18:31.495 "req_id": 1 00:18:31.495 } 00:18:31.495 Got JSON-RPC error response 00:18:31.495 response: 00:18:31.495 { 00:18:31.495 "code": -32602, 00:18:31.495 "message": "Invalid cntlid range [1-0]" 00:18:31.495 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.495 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32507 -I 65520 00:18:31.753 [2024-11-05 12:33:00.968145] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode32507: invalid cntlid range [1-65520] 00:18:31.753 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:31.753 { 00:18:31.753 "nqn": "nqn.2016-06.io.spdk:cnode32507", 00:18:31.753 "max_cntlid": 65520, 00:18:31.753 "method": "nvmf_create_subsystem", 00:18:31.753 "req_id": 1 00:18:31.753 } 00:18:31.753 Got JSON-RPC error response 00:18:31.753 response: 00:18:31.753 { 00:18:31.753 "code": -32602, 00:18:31.753 "message": "Invalid cntlid range [1-65520]" 00:18:31.753 }' 00:18:31.753 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:31.753 { 00:18:31.753 "nqn": "nqn.2016-06.io.spdk:cnode32507", 00:18:31.753 "max_cntlid": 65520, 00:18:31.753 "method": "nvmf_create_subsystem", 00:18:31.753 "req_id": 1 00:18:31.753 } 00:18:31.753 Got JSON-RPC error response 00:18:31.753 response: 00:18:31.753 { 00:18:31.753 "code": -32602, 00:18:31.753 "message": "Invalid cntlid range [1-65520]" 00:18:31.753 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.753 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17054 -i 6 -I 5 00:18:32.318 [2024-11-05 12:33:01.253114] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17054: invalid cntlid range [6-5] 00:18:32.318 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:32.318 { 00:18:32.318 "nqn": "nqn.2016-06.io.spdk:cnode17054", 00:18:32.318 "min_cntlid": 6, 00:18:32.318 "max_cntlid": 5, 00:18:32.318 "method": "nvmf_create_subsystem", 00:18:32.318 "req_id": 1 00:18:32.318 } 00:18:32.318 Got JSON-RPC error response 00:18:32.318 response: 00:18:32.318 { 00:18:32.318 "code": -32602, 00:18:32.318 "message": "Invalid cntlid range [6-5]" 00:18:32.318 }' 00:18:32.318 12:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:32.318 { 00:18:32.318 "nqn": "nqn.2016-06.io.spdk:cnode17054", 00:18:32.318 "min_cntlid": 6, 00:18:32.318 "max_cntlid": 5, 00:18:32.318 "method": "nvmf_create_subsystem", 00:18:32.318 "req_id": 1 00:18:32.318 } 00:18:32.318 Got JSON-RPC error response 00:18:32.318 response: 00:18:32.318 { 00:18:32.318 "code": -32602, 00:18:32.319 "message": "Invalid cntlid range [6-5]" 00:18:32.319 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:32.319 { 00:18:32.319 "name": "foobar", 00:18:32.319 "method": "nvmf_delete_target", 00:18:32.319 "req_id": 1 00:18:32.319 } 00:18:32.319 Got JSON-RPC error response 00:18:32.319 response: 00:18:32.319 { 00:18:32.319 "code": -32602, 00:18:32.319 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:32.319 }' 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:32.319 { 00:18:32.319 "name": "foobar", 00:18:32.319 "method": "nvmf_delete_target", 00:18:32.319 "req_id": 1 00:18:32.319 } 00:18:32.319 Got JSON-RPC error response 00:18:32.319 response: 00:18:32.319 { 00:18:32.319 "code": -32602, 00:18:32.319 "message": "The specified target doesn't exist, cannot delete it." 
00:18:32.319 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:32.319 rmmod nvme_tcp 00:18:32.319 rmmod nvme_fabrics 00:18:32.319 rmmod nvme_keyring 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 627460 ']' 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 627460 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 627460 ']' 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 627460 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 627460 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 627460' 00:18:32.319 killing process with pid 627460 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 627460 00:18:32.319 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 627460 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.577 12:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.577 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:35.112 00:18:35.112 real 0m9.277s 00:18:35.112 user 0m22.297s 00:18:35.112 sys 0m2.654s 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:35.112 ************************************ 00:18:35.112 END TEST nvmf_invalid 00:18:35.112 ************************************ 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.112 ************************************ 00:18:35.112 START TEST nvmf_connect_stress 00:18:35.112 ************************************ 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:35.112 * Looking for test storage... 
00:18:35.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.112 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:35.113 12:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.113 12:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.113 --rc genhtml_branch_coverage=1 00:18:35.113 --rc genhtml_function_coverage=1 00:18:35.113 --rc genhtml_legend=1 00:18:35.113 --rc geninfo_all_blocks=1 00:18:35.113 --rc geninfo_unexecuted_blocks=1 00:18:35.113 00:18:35.113 ' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.113 --rc genhtml_branch_coverage=1 00:18:35.113 --rc genhtml_function_coverage=1 00:18:35.113 --rc genhtml_legend=1 00:18:35.113 --rc geninfo_all_blocks=1 00:18:35.113 --rc geninfo_unexecuted_blocks=1 00:18:35.113 00:18:35.113 ' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.113 --rc genhtml_branch_coverage=1 00:18:35.113 --rc genhtml_function_coverage=1 00:18:35.113 --rc genhtml_legend=1 00:18:35.113 --rc geninfo_all_blocks=1 00:18:35.113 --rc geninfo_unexecuted_blocks=1 00:18:35.113 00:18:35.113 ' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:35.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.113 --rc genhtml_branch_coverage=1 00:18:35.113 --rc genhtml_function_coverage=1 00:18:35.113 --rc genhtml_legend=1 00:18:35.113 --rc geninfo_all_blocks=1 00:18:35.113 --rc geninfo_unexecuted_blocks=1 00:18:35.113 00:18:35.113 ' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:35.113 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:35.114 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.020 12:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:37.020 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:37.020 12:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:37.020 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.020 12:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:37.020 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:37.020 Found net devices under 0000:0a:00.1: cvl_0_1 
00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:37.020 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:37.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:37.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:18:37.021 00:18:37.021 --- 10.0.0.2 ping statistics --- 00:18:37.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.021 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:18:37.021 00:18:37.021 --- 10.0.0.1 ping statistics --- 00:18:37.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.021 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:37.021 12:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=630228 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 630228 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 630228 ']' 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:37.021 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.280 [2024-11-05 12:33:06.291476] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:18:37.280 [2024-11-05 12:33:06.291556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.280 [2024-11-05 12:33:06.366042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:37.280 [2024-11-05 12:33:06.410647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.280 [2024-11-05 12:33:06.410728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.280 [2024-11-05 12:33:06.410753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.280 [2024-11-05 12:33:06.410764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.280 [2024-11-05 12:33:06.410773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.280 [2024-11-05 12:33:06.412216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.280 [2024-11-05 12:33:06.412275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.280 [2024-11-05 12:33:06.412279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.560 [2024-11-05 12:33:06.552635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.560 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.561 [2024-11-05 12:33:06.570052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.561 NULL1 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=630367 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.561 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.836 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.836 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:37.836 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.836 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.836 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.099 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.099 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:38.099 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.099 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.099 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.356 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.357 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:38.357 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.357 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.357 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.921 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.921 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:38.921 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.921 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.921 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.178 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.178 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:39.178 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.178 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.178 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.436 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.436 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:39.436 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.436 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.436 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.694 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.694 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:39.694 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.694 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.694 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.258 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.258 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:40.258 12:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.258 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.258 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.515 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.515 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:40.515 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.515 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.515 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.772 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.772 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:40.772 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.772 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.772 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.029 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.029 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:41.029 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.029 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.029 12:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.287 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.287 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:41.287 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.287 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.287 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.851 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.851 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:41.851 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.851 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.851 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.108 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.108 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:42.108 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.108 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.109 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.366 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.366 12:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:42.366 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.366 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.366 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.624 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.624 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:42.624 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.624 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.624 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.881 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.881 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:42.881 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:42.881 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.881 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.446 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.446 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:43.446 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.446 12:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.446 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.703 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.703 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:43.703 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.703 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.703 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.961 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.961 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:43.961 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.961 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.961 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.217 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.217 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:44.217 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.217 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.217 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:44.474 12:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.474 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:44.474 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.474 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.474 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.040 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.040 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:45.040 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.040 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.040 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:45.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.297 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.555 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.555 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:45.555 
12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.555 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.555 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.812 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.812 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:45.812 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.813 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.813 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.070 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.070 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:46.070 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.070 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.070 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.636 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.636 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:46.636 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.636 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.636 
12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.893 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.893 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:46.893 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.893 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.893 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.151 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.151 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:47.151 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.151 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.151 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.408 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.408 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:47.408 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.408 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.408 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.666 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:47.666 12:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.666 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 630367 00:18:47.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (630367) - No such process 00:18:47.666 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 630367 00:18:47.666 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.924 rmmod nvme_tcp 00:18:47.924 rmmod nvme_fabrics 00:18:47.924 rmmod nvme_keyring 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
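The repeated `kill -0 630367` checks above poll the stress-test process until it exits ("No such process" marks the end of the loop). A minimal, hypothetical sketch of that liveness-probe pattern (not taken from connect_stress.sh itself): `kill -0 PID` delivers no signal, it only checks that PID still exists, so it succeeds while the process runs and fails once the process has exited.

```shell
#!/bin/bash
# Hypothetical sketch of the liveness probe the log repeats: `kill -0` sends
# no signal; its exit status alone reports whether the PID still exists.
sleep 1 &                                    # stand-in for the connect_stress perf process
pid=$!
kill -0 "$pid" && echo alive                 # probe succeeds: process still running
wait "$pid"                                  # block until the process finishes
kill -0 "$pid" 2>/dev/null || echo exited    # probe now fails: process is gone
```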
00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 630228 ']' 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 630228 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 630228 ']' 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 630228 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 630228 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 630228' 00:18:47.924 killing process with pid 630228 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 630228 00:18:47.924 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 630228 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.183 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.089 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:50.089 00:18:50.089 real 0m15.425s 00:18:50.089 user 0m38.350s 00:18:50.089 sys 0m6.030s 00:18:50.090 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:50.090 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.090 ************************************ 00:18:50.090 END TEST nvmf_connect_stress 00:18:50.090 ************************************ 00:18:50.090 12:33:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:50.090 12:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:50.090 12:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:18:50.090 12:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:50.090 ************************************ 00:18:50.090 START TEST nvmf_fused_ordering 00:18:50.090 ************************************ 00:18:50.090 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:50.090 * Looking for test storage... 00:18:50.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.349 --rc genhtml_branch_coverage=1 00:18:50.349 --rc genhtml_function_coverage=1 00:18:50.349 --rc genhtml_legend=1 00:18:50.349 --rc geninfo_all_blocks=1 00:18:50.349 --rc geninfo_unexecuted_blocks=1 00:18:50.349 00:18:50.349 ' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.349 --rc genhtml_branch_coverage=1 00:18:50.349 --rc genhtml_function_coverage=1 00:18:50.349 --rc genhtml_legend=1 00:18:50.349 --rc geninfo_all_blocks=1 00:18:50.349 --rc geninfo_unexecuted_blocks=1 00:18:50.349 00:18:50.349 ' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.349 --rc genhtml_branch_coverage=1 00:18:50.349 --rc genhtml_function_coverage=1 00:18:50.349 --rc genhtml_legend=1 00:18:50.349 --rc geninfo_all_blocks=1 00:18:50.349 --rc geninfo_unexecuted_blocks=1 00:18:50.349 00:18:50.349 ' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:50.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.349 --rc genhtml_branch_coverage=1 
00:18:50.349 --rc genhtml_function_coverage=1 00:18:50.349 --rc genhtml_legend=1 00:18:50.349 --rc geninfo_all_blocks=1 00:18:50.349 --rc geninfo_unexecuted_blocks=1 00:18:50.349 00:18:50.349 ' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:50.349 12:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.349 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:50.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:50.350 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.886 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:52.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.886 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:52.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.886 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:52.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:52.886 Found net devices under 0000:0a:00.1: cvl_0_1 
00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:52.886 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:52.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:52.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:18:52.887 00:18:52.887 --- 10.0.0.2 ping statistics --- 00:18:52.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.887 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:18:52.887 00:18:52.887 --- 10.0.0.1 ping statistics --- 00:18:52.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.887 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:52.887 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=634025 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 634025 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 634025 ']' 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 [2024-11-05 12:33:21.757436] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:18:52.887 [2024-11-05 12:33:21.757506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.887 [2024-11-05 12:33:21.827277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.887 [2024-11-05 12:33:21.869872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.887 [2024-11-05 12:33:21.869947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.887 [2024-11-05 12:33:21.869969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.887 [2024-11-05 12:33:21.869979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.887 [2024-11-05 12:33:21.869989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.887 [2024-11-05 12:33:21.870566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.887 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 [2024-11-05 12:33:22.004597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.887 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 [2024-11-05 12:33:22.020834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.888 NULL1 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.888 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:52.888 [2024-11-05 12:33:22.064503] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:18:52.888 [2024-11-05 12:33:22.064537] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634048 ] 00:18:53.454 Attached to nqn.2016-06.io.spdk:cnode1 00:18:53.454 Namespace ID: 1 size: 1GB 00:18:53.454 fused_ordering(0) 00:18:53.454 fused_ordering(1) 00:18:53.454 fused_ordering(2) 00:18:53.454 fused_ordering(3) 00:18:53.454 fused_ordering(4) 00:18:53.454 fused_ordering(5) 00:18:53.454 fused_ordering(6) 00:18:53.454 fused_ordering(7) 00:18:53.454 fused_ordering(8) 00:18:53.454 fused_ordering(9) 00:18:53.454 fused_ordering(10) 00:18:53.454 fused_ordering(11) 00:18:53.454 fused_ordering(12) 00:18:53.454 fused_ordering(13) 00:18:53.454 fused_ordering(14) 00:18:53.454 fused_ordering(15) 00:18:53.454 fused_ordering(16) 00:18:53.454 fused_ordering(17) 00:18:53.454 fused_ordering(18) 00:18:53.454 fused_ordering(19) 00:18:53.454 fused_ordering(20) 00:18:53.454 fused_ordering(21) 00:18:53.454 fused_ordering(22) 00:18:53.454 fused_ordering(23) 00:18:53.454 fused_ordering(24) 00:18:53.454 fused_ordering(25) 00:18:53.454 fused_ordering(26) 00:18:53.454 fused_ordering(27) 00:18:53.454 
fused_ordering(28) 00:18:53.454 fused_ordering(29) 00:18:53.454 fused_ordering(30) 00:18:53.454 fused_ordering(31) 00:18:53.454 fused_ordering(32) 00:18:53.454 fused_ordering(33) 00:18:53.454 fused_ordering(34) 00:18:53.454 fused_ordering(35) 00:18:53.454 fused_ordering(36) 00:18:53.454 fused_ordering(37) 00:18:53.454 fused_ordering(38) 00:18:53.454 fused_ordering(39) 00:18:53.454 fused_ordering(40) 00:18:53.454 fused_ordering(41) 00:18:53.454 fused_ordering(42) 00:18:53.454 fused_ordering(43) 00:18:53.454 fused_ordering(44) 00:18:53.454 fused_ordering(45) 00:18:53.454 fused_ordering(46) 00:18:53.454 fused_ordering(47) 00:18:53.454 fused_ordering(48) 00:18:53.454 fused_ordering(49) 00:18:53.454 fused_ordering(50) 00:18:53.454 fused_ordering(51) 00:18:53.454 fused_ordering(52) 00:18:53.454 fused_ordering(53) 00:18:53.454 fused_ordering(54) 00:18:53.454 fused_ordering(55) 00:18:53.454 fused_ordering(56) 00:18:53.454 fused_ordering(57) 00:18:53.454 fused_ordering(58) 00:18:53.454 fused_ordering(59) 00:18:53.454 fused_ordering(60) 00:18:53.454 fused_ordering(61) 00:18:53.454 fused_ordering(62) 00:18:53.454 fused_ordering(63) 00:18:53.454 fused_ordering(64) 00:18:53.454 fused_ordering(65) 00:18:53.454 fused_ordering(66) 00:18:53.454 fused_ordering(67) 00:18:53.454 fused_ordering(68) 00:18:53.454 fused_ordering(69) 00:18:53.454 fused_ordering(70) 00:18:53.454 fused_ordering(71) 00:18:53.454 fused_ordering(72) 00:18:53.454 fused_ordering(73) 00:18:53.454 fused_ordering(74) 00:18:53.454 fused_ordering(75) 00:18:53.454 fused_ordering(76) 00:18:53.454 fused_ordering(77) 00:18:53.454 fused_ordering(78) 00:18:53.454 fused_ordering(79) 00:18:53.454 fused_ordering(80) 00:18:53.454 fused_ordering(81) 00:18:53.454 fused_ordering(82) 00:18:53.454 fused_ordering(83) 00:18:53.454 fused_ordering(84) 00:18:53.454 fused_ordering(85) 00:18:53.454 fused_ordering(86) 00:18:53.454 fused_ordering(87) 00:18:53.454 fused_ordering(88) 00:18:53.454 fused_ordering(89) 00:18:53.454 
fused_ordering(90) 00:18:53.454 fused_ordering(91) 00:18:53.454 fused_ordering(92) 00:18:53.454 fused_ordering(93) 00:18:53.454 fused_ordering(94) 00:18:53.454 fused_ordering(95) 00:18:53.454 fused_ordering(96) 00:18:53.454 fused_ordering(97) 00:18:53.454 fused_ordering(98) 00:18:53.454 fused_ordering(99) 00:18:53.454 fused_ordering(100) 00:18:53.454 fused_ordering(101) 00:18:53.454 fused_ordering(102) 00:18:53.454 fused_ordering(103) 00:18:53.454 fused_ordering(104) 00:18:53.454 fused_ordering(105) 00:18:53.454 fused_ordering(106) 00:18:53.454 fused_ordering(107) 00:18:53.454 fused_ordering(108) 00:18:53.454 fused_ordering(109) 00:18:53.454 fused_ordering(110) 00:18:53.454 fused_ordering(111) 00:18:53.454 fused_ordering(112) 00:18:53.454 fused_ordering(113) 00:18:53.454 fused_ordering(114) 00:18:53.454 fused_ordering(115) 00:18:53.454 fused_ordering(116) 00:18:53.454 fused_ordering(117) 00:18:53.454 fused_ordering(118) 00:18:53.454 fused_ordering(119) 00:18:53.454 fused_ordering(120) 00:18:53.454 fused_ordering(121) 00:18:53.454 fused_ordering(122) 00:18:53.454 fused_ordering(123) 00:18:53.454 fused_ordering(124) 00:18:53.454 fused_ordering(125) 00:18:53.454 fused_ordering(126) 00:18:53.454 fused_ordering(127) 00:18:53.454 fused_ordering(128) 00:18:53.454 fused_ordering(129) 00:18:53.454 fused_ordering(130) 00:18:53.454 fused_ordering(131) 00:18:53.454 fused_ordering(132) 00:18:53.454 fused_ordering(133) 00:18:53.454 fused_ordering(134) 00:18:53.454 fused_ordering(135) 00:18:53.454 fused_ordering(136) 00:18:53.454 fused_ordering(137) 00:18:53.454 fused_ordering(138) 00:18:53.454 fused_ordering(139) 00:18:53.454 fused_ordering(140) 00:18:53.454 fused_ordering(141) 00:18:53.454 fused_ordering(142) 00:18:53.454 fused_ordering(143) 00:18:53.454 fused_ordering(144) 00:18:53.454 fused_ordering(145) 00:18:53.454 fused_ordering(146) 00:18:53.454 fused_ordering(147) 00:18:53.454 fused_ordering(148) 00:18:53.454 fused_ordering(149) 00:18:53.454 fused_ordering(150) 
00:18:53.454 fused_ordering(151) 00:18:53.455 fused_ordering(152) 00:18:53.455 fused_ordering(153) 00:18:53.455 fused_ordering(154) 00:18:53.455 fused_ordering(155) 00:18:53.455 fused_ordering(156) 00:18:53.455 fused_ordering(157) 00:18:53.455 fused_ordering(158) 00:18:53.455 fused_ordering(159) 00:18:53.455 fused_ordering(160) 00:18:53.455 fused_ordering(161) 00:18:53.455 fused_ordering(162) 00:18:53.455 fused_ordering(163) 00:18:53.455 fused_ordering(164) 00:18:53.455 fused_ordering(165) 00:18:53.455 fused_ordering(166) 00:18:53.455 fused_ordering(167) 00:18:53.455 fused_ordering(168) 00:18:53.455 fused_ordering(169) 00:18:53.455 fused_ordering(170) 00:18:53.455 fused_ordering(171) 00:18:53.455 fused_ordering(172) 00:18:53.455 fused_ordering(173) 00:18:53.455 fused_ordering(174) 00:18:53.455 fused_ordering(175) 00:18:53.455 fused_ordering(176) 00:18:53.455 fused_ordering(177) 00:18:53.455 fused_ordering(178) 00:18:53.455 fused_ordering(179) 00:18:53.455 fused_ordering(180) 00:18:53.455 fused_ordering(181) 00:18:53.455 fused_ordering(182) 00:18:53.455 fused_ordering(183) 00:18:53.455 fused_ordering(184) 00:18:53.455 fused_ordering(185) 00:18:53.455 fused_ordering(186) 00:18:53.455 fused_ordering(187) 00:18:53.455 fused_ordering(188) 00:18:53.455 fused_ordering(189) 00:18:53.455 fused_ordering(190) 00:18:53.455 fused_ordering(191) 00:18:53.455 fused_ordering(192) 00:18:53.455 fused_ordering(193) 00:18:53.455 fused_ordering(194) 00:18:53.455 fused_ordering(195) 00:18:53.455 fused_ordering(196) 00:18:53.455 fused_ordering(197) 00:18:53.455 fused_ordering(198) 00:18:53.455 fused_ordering(199) 00:18:53.455 fused_ordering(200) 00:18:53.455 fused_ordering(201) 00:18:53.455 fused_ordering(202) 00:18:53.455 fused_ordering(203) 00:18:53.455 fused_ordering(204) 00:18:53.455 fused_ordering(205) 00:18:53.713 fused_ordering(206) 00:18:53.713 fused_ordering(207) 00:18:53.713 fused_ordering(208) 00:18:53.713 fused_ordering(209) 00:18:53.713 fused_ordering(210) 00:18:53.713 
fused_ordering(211) 00:18:53.713 fused_ordering(212) 00:18:53.713 fused_ordering(213) 00:18:53.713 fused_ordering(214) 00:18:53.713 fused_ordering(215) 00:18:53.713 fused_ordering(216) 00:18:53.713 fused_ordering(217) 00:18:53.713 fused_ordering(218) 00:18:53.713 fused_ordering(219) 00:18:53.713 fused_ordering(220) 00:18:53.713 fused_ordering(221) 00:18:53.713 fused_ordering(222) 00:18:53.713 fused_ordering(223) 00:18:53.713 fused_ordering(224) 00:18:53.713 fused_ordering(225) 00:18:53.713 fused_ordering(226) 00:18:53.713 fused_ordering(227) 00:18:53.713 fused_ordering(228) 00:18:53.713 fused_ordering(229) 00:18:53.713 fused_ordering(230) 00:18:53.713 fused_ordering(231) 00:18:53.713 fused_ordering(232) 00:18:53.713 fused_ordering(233) 00:18:53.713 fused_ordering(234) 00:18:53.713 fused_ordering(235) 00:18:53.713 fused_ordering(236) 00:18:53.713 fused_ordering(237) 00:18:53.713 fused_ordering(238) 00:18:53.713 fused_ordering(239) 00:18:53.713 fused_ordering(240) 00:18:53.713 fused_ordering(241) 00:18:53.713 fused_ordering(242) 00:18:53.713 fused_ordering(243) 00:18:53.713 fused_ordering(244) 00:18:53.713 fused_ordering(245) 00:18:53.713 fused_ordering(246) 00:18:53.713 fused_ordering(247) 00:18:53.713 fused_ordering(248) 00:18:53.713 fused_ordering(249) 00:18:53.713 fused_ordering(250) 00:18:53.713 fused_ordering(251) 00:18:53.713 fused_ordering(252) 00:18:53.713 fused_ordering(253) 00:18:53.713 fused_ordering(254) 00:18:53.713 fused_ordering(255) 00:18:53.713 fused_ordering(256) 00:18:53.713 fused_ordering(257) 00:18:53.713 fused_ordering(258) 00:18:53.713 fused_ordering(259) 00:18:53.713 fused_ordering(260) 00:18:53.713 fused_ordering(261) 00:18:53.713 fused_ordering(262) 00:18:53.713 fused_ordering(263) 00:18:53.713 fused_ordering(264) 00:18:53.713 fused_ordering(265) 00:18:53.713 fused_ordering(266) 00:18:53.713 fused_ordering(267) 00:18:53.713 fused_ordering(268) 00:18:53.713 fused_ordering(269) 00:18:53.713 fused_ordering(270) 00:18:53.713 fused_ordering(271) 
00:18:53.713 fused_ordering(272) 00:18:53.713 fused_ordering(273) 00:18:53.713 fused_ordering(274) 00:18:53.713 fused_ordering(275) 00:18:53.713 fused_ordering(276) 00:18:53.713 fused_ordering(277) 00:18:53.713 fused_ordering(278) 00:18:53.713 fused_ordering(279) 00:18:53.713 fused_ordering(280) 00:18:53.713 fused_ordering(281) 00:18:53.713 fused_ordering(282) 00:18:53.713 fused_ordering(283) 00:18:53.713 fused_ordering(284) 00:18:53.713 fused_ordering(285) 00:18:53.713 fused_ordering(286) 00:18:53.713 fused_ordering(287) 00:18:53.713 fused_ordering(288) 00:18:53.713 fused_ordering(289) 00:18:53.713 fused_ordering(290) 00:18:53.713 fused_ordering(291) 00:18:53.713 fused_ordering(292) 00:18:53.713 fused_ordering(293) 00:18:53.713 fused_ordering(294) 00:18:53.713 fused_ordering(295) 00:18:53.713 fused_ordering(296) 00:18:53.713 fused_ordering(297) 00:18:53.713 fused_ordering(298) 00:18:53.713 fused_ordering(299) 00:18:53.713 fused_ordering(300) 00:18:53.713 fused_ordering(301) 00:18:53.713 fused_ordering(302) 00:18:53.713 fused_ordering(303) 00:18:53.713 fused_ordering(304) 00:18:53.713 fused_ordering(305) 00:18:53.713 fused_ordering(306) 00:18:53.713 fused_ordering(307) 00:18:53.713 fused_ordering(308) 00:18:53.713 fused_ordering(309) 00:18:53.713 fused_ordering(310) 00:18:53.713 fused_ordering(311) 00:18:53.713 fused_ordering(312) 00:18:53.713 fused_ordering(313) 00:18:53.713 fused_ordering(314) 00:18:53.713 fused_ordering(315) 00:18:53.713 fused_ordering(316) 00:18:53.713 fused_ordering(317) 00:18:53.713 fused_ordering(318) 00:18:53.713 fused_ordering(319) 00:18:53.713 fused_ordering(320) 00:18:53.713 fused_ordering(321) 00:18:53.713 fused_ordering(322) 00:18:53.713 fused_ordering(323) 00:18:53.713 fused_ordering(324) 00:18:53.713 fused_ordering(325) 00:18:53.713 fused_ordering(326) 00:18:53.713 fused_ordering(327) 00:18:53.713 fused_ordering(328) 00:18:53.713 fused_ordering(329) 00:18:53.713 fused_ordering(330) 00:18:53.713 fused_ordering(331) 00:18:53.713 
fused_ordering(332) 00:18:53.713 fused_ordering(333) 00:18:53.713 fused_ordering(334) 00:18:53.713 fused_ordering(335) 00:18:53.713 fused_ordering(336) 00:18:53.713 fused_ordering(337) 00:18:53.713 fused_ordering(338) 00:18:53.713 fused_ordering(339) 00:18:53.713 fused_ordering(340) 00:18:53.713 fused_ordering(341) 00:18:53.713 fused_ordering(342) 00:18:53.713 fused_ordering(343) 00:18:53.713 fused_ordering(344) 00:18:53.713 fused_ordering(345) 00:18:53.713 fused_ordering(346) 00:18:53.713 fused_ordering(347) 00:18:53.713 fused_ordering(348) 00:18:53.713 fused_ordering(349) 00:18:53.713 fused_ordering(350) 00:18:53.713 fused_ordering(351) 00:18:53.713 fused_ordering(352) 00:18:53.714 fused_ordering(353) 00:18:53.714 fused_ordering(354) 00:18:53.714 fused_ordering(355) 00:18:53.714 fused_ordering(356) 00:18:53.714 fused_ordering(357) 00:18:53.714 fused_ordering(358) 00:18:53.714 fused_ordering(359) 00:18:53.714 fused_ordering(360) 00:18:53.714 fused_ordering(361) 00:18:53.714 fused_ordering(362) 00:18:53.714 fused_ordering(363) 00:18:53.714 fused_ordering(364) 00:18:53.714 fused_ordering(365) 00:18:53.714 fused_ordering(366) 00:18:53.714 fused_ordering(367) 00:18:53.714 fused_ordering(368) 00:18:53.714 fused_ordering(369) 00:18:53.714 fused_ordering(370) 00:18:53.714 fused_ordering(371) 00:18:53.714 fused_ordering(372) 00:18:53.714 fused_ordering(373) 00:18:53.714 fused_ordering(374) 00:18:53.714 fused_ordering(375) 00:18:53.714 fused_ordering(376) 00:18:53.714 fused_ordering(377) 00:18:53.714 fused_ordering(378) 00:18:53.714 fused_ordering(379) 00:18:53.714 fused_ordering(380) 00:18:53.714 fused_ordering(381) 00:18:53.714 fused_ordering(382) 00:18:53.714 fused_ordering(383) 00:18:53.714 fused_ordering(384) 00:18:53.714 fused_ordering(385) 00:18:53.714 fused_ordering(386) 00:18:53.714 fused_ordering(387) 00:18:53.714 fused_ordering(388) 00:18:53.714 fused_ordering(389) 00:18:53.714 fused_ordering(390) 00:18:53.714 fused_ordering(391) 00:18:53.714 fused_ordering(392) 
00:18:53.714 fused_ordering(393) 00:18:53.714 fused_ordering(394) 00:18:53.714 fused_ordering(395) 00:18:53.714 fused_ordering(396) 00:18:53.714 fused_ordering(397) 00:18:53.714 fused_ordering(398) 00:18:53.714 fused_ordering(399) 00:18:53.714 fused_ordering(400) 00:18:53.714 fused_ordering(401) 00:18:53.714 fused_ordering(402) 00:18:53.714 fused_ordering(403) 00:18:53.714 fused_ordering(404) 00:18:53.714 fused_ordering(405) 00:18:53.714 fused_ordering(406) 00:18:53.714 fused_ordering(407) 00:18:53.714 fused_ordering(408) 00:18:53.714 fused_ordering(409) 00:18:53.714 fused_ordering(410) 00:18:53.972 fused_ordering(411) 00:18:53.972 fused_ordering(412) 00:18:53.972 fused_ordering(413) 00:18:53.972 fused_ordering(414) 00:18:53.972 fused_ordering(415) 00:18:53.972 fused_ordering(416) 00:18:53.972 fused_ordering(417) 00:18:53.972 fused_ordering(418) 00:18:53.972 fused_ordering(419) 00:18:53.972 fused_ordering(420) 00:18:53.972 fused_ordering(421) 00:18:53.972 fused_ordering(422) 00:18:53.972 fused_ordering(423) 00:18:53.972 fused_ordering(424) 00:18:53.972 fused_ordering(425) 00:18:53.972 fused_ordering(426) 00:18:53.972 fused_ordering(427) 00:18:53.972 fused_ordering(428) 00:18:53.972 fused_ordering(429) 00:18:53.972 fused_ordering(430) 00:18:53.972 fused_ordering(431) 00:18:53.972 fused_ordering(432) 00:18:53.972 fused_ordering(433) 00:18:53.972 fused_ordering(434) 00:18:53.972 fused_ordering(435) 00:18:53.972 fused_ordering(436) 00:18:53.972 fused_ordering(437) 00:18:53.972 fused_ordering(438) 00:18:53.972 fused_ordering(439) 00:18:53.972 fused_ordering(440) 00:18:53.972 fused_ordering(441) 00:18:53.972 fused_ordering(442) 00:18:53.972 fused_ordering(443) 00:18:53.972 fused_ordering(444) 00:18:53.972 fused_ordering(445) 00:18:53.972 fused_ordering(446) 00:18:53.972 fused_ordering(447) 00:18:53.972 fused_ordering(448) 00:18:53.972 fused_ordering(449) 00:18:53.972 fused_ordering(450) 00:18:53.972 fused_ordering(451) 00:18:53.972 fused_ordering(452) 00:18:53.972 
fused_ordering(453) 00:18:53.972 fused_ordering(454) 00:18:53.972 fused_ordering(455) 00:18:53.972 fused_ordering(456) 00:18:53.972 fused_ordering(457) 00:18:53.972 fused_ordering(458) 00:18:53.972 fused_ordering(459) 00:18:53.972 fused_ordering(460) 00:18:53.972 fused_ordering(461) 00:18:53.972 fused_ordering(462) 00:18:53.972 fused_ordering(463) 00:18:53.972 fused_ordering(464) 00:18:53.972 fused_ordering(465) 00:18:53.972 fused_ordering(466) 00:18:53.972 fused_ordering(467) 00:18:53.972 fused_ordering(468) 00:18:53.972 fused_ordering(469) 00:18:53.972 fused_ordering(470) 00:18:53.972 fused_ordering(471) 00:18:53.972 fused_ordering(472) 00:18:53.972 fused_ordering(473) 00:18:53.972 fused_ordering(474) 00:18:53.972 fused_ordering(475) 00:18:53.972 fused_ordering(476) 00:18:53.972 fused_ordering(477) 00:18:53.972 fused_ordering(478) 00:18:53.972 fused_ordering(479) 00:18:53.972 fused_ordering(480) 00:18:53.972 fused_ordering(481) 00:18:53.972 fused_ordering(482) 00:18:53.972 fused_ordering(483) 00:18:53.972 fused_ordering(484) 00:18:53.972 fused_ordering(485) 00:18:53.972 fused_ordering(486) 00:18:53.972 fused_ordering(487) 00:18:53.972 fused_ordering(488) 00:18:53.972 fused_ordering(489) 00:18:53.972 fused_ordering(490) 00:18:53.972 fused_ordering(491) 00:18:53.972 fused_ordering(492) 00:18:53.972 fused_ordering(493) 00:18:53.972 fused_ordering(494) 00:18:53.972 fused_ordering(495) 00:18:53.972 fused_ordering(496) 00:18:53.972 fused_ordering(497) 00:18:53.972 fused_ordering(498) 00:18:53.972 fused_ordering(499) 00:18:53.972 fused_ordering(500) 00:18:53.972 fused_ordering(501) 00:18:53.972 fused_ordering(502) 00:18:53.972 fused_ordering(503) 00:18:53.972 fused_ordering(504) 00:18:53.972 fused_ordering(505) 00:18:53.972 fused_ordering(506) 00:18:53.972 fused_ordering(507) 00:18:53.972 fused_ordering(508) 00:18:53.972 fused_ordering(509) 00:18:53.972 fused_ordering(510) 00:18:53.972 fused_ordering(511) 00:18:53.972 fused_ordering(512) 00:18:53.972 fused_ordering(513) 
00:18:53.972 fused_ordering(514) 00:18:53.972 fused_ordering(515) 00:18:53.972 fused_ordering(516) 00:18:53.972 fused_ordering(517) 00:18:53.972 fused_ordering(518) 00:18:53.972 fused_ordering(519) 00:18:53.972 fused_ordering(520) 00:18:53.972 fused_ordering(521) 00:18:53.972 fused_ordering(522) 00:18:53.972 fused_ordering(523) 00:18:53.972 fused_ordering(524) 00:18:53.972 fused_ordering(525) 00:18:53.972 fused_ordering(526) 00:18:53.972 fused_ordering(527) 00:18:53.972 fused_ordering(528) 00:18:53.972 fused_ordering(529) 00:18:53.972 fused_ordering(530) 00:18:53.972 fused_ordering(531) 00:18:53.972 fused_ordering(532) 00:18:53.972 fused_ordering(533) 00:18:53.972 fused_ordering(534) 00:18:53.972 fused_ordering(535) 00:18:53.972 fused_ordering(536) 00:18:53.972 fused_ordering(537) 00:18:53.972 fused_ordering(538) 00:18:53.972 fused_ordering(539) 00:18:53.972 fused_ordering(540) 00:18:53.972 fused_ordering(541) 00:18:53.972 fused_ordering(542) 00:18:53.972 fused_ordering(543) 00:18:53.972 fused_ordering(544) 00:18:53.972 fused_ordering(545) 00:18:53.972 fused_ordering(546) 00:18:53.972 fused_ordering(547) 00:18:53.972 fused_ordering(548) 00:18:53.972 fused_ordering(549) 00:18:53.972 fused_ordering(550) 00:18:53.972 fused_ordering(551) 00:18:53.972 fused_ordering(552) 00:18:53.972 fused_ordering(553) 00:18:53.972 fused_ordering(554) 00:18:53.972 fused_ordering(555) 00:18:53.972 fused_ordering(556) 00:18:53.972 fused_ordering(557) 00:18:53.972 fused_ordering(558) 00:18:53.972 fused_ordering(559) 00:18:53.972 fused_ordering(560) 00:18:53.972 fused_ordering(561) 00:18:53.972 fused_ordering(562) 00:18:53.972 fused_ordering(563) 00:18:53.972 fused_ordering(564) 00:18:53.972 fused_ordering(565) 00:18:53.972 fused_ordering(566) 00:18:53.972 fused_ordering(567) 00:18:53.972 fused_ordering(568) 00:18:53.972 fused_ordering(569) 00:18:53.972 fused_ordering(570) 00:18:53.972 fused_ordering(571) 00:18:53.972 fused_ordering(572) 00:18:53.972 fused_ordering(573) 00:18:53.972 
fused_ordering(574) 00:18:53.972 fused_ordering(575) 00:18:53.972 fused_ordering(576) 00:18:53.972 fused_ordering(577) 00:18:53.973 fused_ordering(578) 00:18:53.973 fused_ordering(579) 00:18:53.973 fused_ordering(580) 00:18:53.973 fused_ordering(581) 00:18:53.973 fused_ordering(582) 00:18:53.973 fused_ordering(583) 00:18:53.973 fused_ordering(584) 00:18:53.973 fused_ordering(585) 00:18:53.973 fused_ordering(586) 00:18:53.973 fused_ordering(587) 00:18:53.973 fused_ordering(588) 00:18:53.973 fused_ordering(589) 00:18:53.973 fused_ordering(590) 00:18:53.973 fused_ordering(591) 00:18:53.973 fused_ordering(592) 00:18:53.973 fused_ordering(593) 00:18:53.973 fused_ordering(594) 00:18:53.973 fused_ordering(595) 00:18:53.973 fused_ordering(596) 00:18:53.973 fused_ordering(597) 00:18:53.973 fused_ordering(598) 00:18:53.973 fused_ordering(599) 00:18:53.973 fused_ordering(600) 00:18:53.973 fused_ordering(601) 00:18:53.973 fused_ordering(602) 00:18:53.973 fused_ordering(603) 00:18:53.973 fused_ordering(604) 00:18:53.973 fused_ordering(605) 00:18:53.973 fused_ordering(606) 00:18:53.973 fused_ordering(607) 00:18:53.973 fused_ordering(608) 00:18:53.973 fused_ordering(609) 00:18:53.973 fused_ordering(610) 00:18:53.973 fused_ordering(611) 00:18:53.973 fused_ordering(612) 00:18:53.973 fused_ordering(613) 00:18:53.973 fused_ordering(614) 00:18:53.973 fused_ordering(615) 00:18:54.538 fused_ordering(616) 00:18:54.538 fused_ordering(617) 00:18:54.538 fused_ordering(618) 00:18:54.538 fused_ordering(619) 00:18:54.538 fused_ordering(620) 00:18:54.538 fused_ordering(621) 00:18:54.538 fused_ordering(622) 00:18:54.538 fused_ordering(623) 00:18:54.538 fused_ordering(624) 00:18:54.538 fused_ordering(625) 00:18:54.538 fused_ordering(626) 00:18:54.538 fused_ordering(627) 00:18:54.538 fused_ordering(628) 00:18:54.538 fused_ordering(629) 00:18:54.538 fused_ordering(630) 00:18:54.538 fused_ordering(631) 00:18:54.538 fused_ordering(632) 00:18:54.538 fused_ordering(633) 00:18:54.538 fused_ordering(634) 
00:18:54.538 fused_ordering(635) 00:18:54.538 fused_ordering(636) 00:18:54.538 fused_ordering(637) 00:18:54.538 fused_ordering(638) 00:18:54.538 fused_ordering(639) 00:18:54.538 fused_ordering(640) 00:18:54.538 fused_ordering(641) 00:18:54.538 fused_ordering(642) 00:18:54.538 fused_ordering(643) 00:18:54.538 fused_ordering(644) 00:18:54.538 fused_ordering(645) 00:18:54.538 fused_ordering(646) 00:18:54.538 fused_ordering(647) 00:18:54.538 fused_ordering(648) 00:18:54.538 fused_ordering(649) 00:18:54.538 fused_ordering(650) 00:18:54.538 fused_ordering(651) 00:18:54.538 fused_ordering(652) 00:18:54.538 fused_ordering(653) 00:18:54.538 fused_ordering(654) 00:18:54.538 fused_ordering(655) 00:18:54.538 fused_ordering(656) 00:18:54.538 fused_ordering(657) 00:18:54.538 fused_ordering(658) 00:18:54.538 fused_ordering(659) 00:18:54.538 fused_ordering(660) 00:18:54.538 fused_ordering(661) 00:18:54.538 fused_ordering(662) 00:18:54.538 fused_ordering(663) 00:18:54.538 fused_ordering(664) 00:18:54.538 fused_ordering(665) 00:18:54.538 fused_ordering(666) 00:18:54.538 fused_ordering(667) 00:18:54.538 fused_ordering(668) 00:18:54.538 fused_ordering(669) 00:18:54.538 fused_ordering(670) 00:18:54.538 fused_ordering(671) 00:18:54.538 fused_ordering(672) 00:18:54.538 fused_ordering(673) 00:18:54.538 fused_ordering(674) 00:18:54.538 fused_ordering(675) 00:18:54.538 fused_ordering(676) 00:18:54.538 fused_ordering(677) 00:18:54.538 fused_ordering(678) 00:18:54.538 fused_ordering(679) 00:18:54.538 fused_ordering(680) 00:18:54.538 fused_ordering(681) 00:18:54.538 fused_ordering(682) 00:18:54.538 fused_ordering(683) 00:18:54.538 fused_ordering(684) 00:18:54.538 fused_ordering(685) 00:18:54.538 fused_ordering(686) 00:18:54.538 fused_ordering(687) 00:18:54.538 fused_ordering(688) 00:18:54.538 fused_ordering(689) 00:18:54.538 fused_ordering(690) 00:18:54.538 fused_ordering(691) 00:18:54.538 fused_ordering(692) 00:18:54.538 fused_ordering(693) 00:18:54.538 fused_ordering(694) 00:18:54.538 
fused_ordering(695) 00:18:54.538 fused_ordering(696) 00:18:54.538 fused_ordering(697) 00:18:54.538 fused_ordering(698) 00:18:54.538 fused_ordering(699) 00:18:54.538 fused_ordering(700) 00:18:54.538 fused_ordering(701) 00:18:54.538 fused_ordering(702) 00:18:54.538 fused_ordering(703) 00:18:54.538 fused_ordering(704) 00:18:54.538 fused_ordering(705) 00:18:54.538 fused_ordering(706) 00:18:54.538 fused_ordering(707) 00:18:54.538 fused_ordering(708) 00:18:54.538 fused_ordering(709) 00:18:54.538 fused_ordering(710) 00:18:54.538 fused_ordering(711) 00:18:54.538 fused_ordering(712) 00:18:54.538 fused_ordering(713) 00:18:54.538 fused_ordering(714) 00:18:54.538 fused_ordering(715) 00:18:54.538 fused_ordering(716) 00:18:54.538 fused_ordering(717) 00:18:54.538 fused_ordering(718) 00:18:54.538 fused_ordering(719) 00:18:54.538 fused_ordering(720) 00:18:54.538 fused_ordering(721) 00:18:54.538 fused_ordering(722) 00:18:54.538 fused_ordering(723) 00:18:54.538 fused_ordering(724) 00:18:54.538 fused_ordering(725) 00:18:54.538 fused_ordering(726) 00:18:54.538 fused_ordering(727) 00:18:54.538 fused_ordering(728) 00:18:54.538 fused_ordering(729) 00:18:54.538 fused_ordering(730) 00:18:54.538 fused_ordering(731) 00:18:54.538 fused_ordering(732) 00:18:54.538 fused_ordering(733) 00:18:54.538 fused_ordering(734) 00:18:54.538 fused_ordering(735) 00:18:54.539 fused_ordering(736) 00:18:54.539 fused_ordering(737) 00:18:54.539 fused_ordering(738) 00:18:54.539 fused_ordering(739) 00:18:54.539 fused_ordering(740) 00:18:54.539 fused_ordering(741) 00:18:54.539 fused_ordering(742) 00:18:54.539 fused_ordering(743) 00:18:54.539 fused_ordering(744) 00:18:54.539 fused_ordering(745) 00:18:54.539 fused_ordering(746) 00:18:54.539 fused_ordering(747) 00:18:54.539 fused_ordering(748) 00:18:54.539 fused_ordering(749) 00:18:54.539 fused_ordering(750) 00:18:54.539 fused_ordering(751) 00:18:54.539 fused_ordering(752) 00:18:54.539 fused_ordering(753) 00:18:54.539 fused_ordering(754) 00:18:54.539 fused_ordering(755) 
00:18:54.539 fused_ordering(756) 00:18:54.539 fused_ordering(757) 00:18:54.539 fused_ordering(758) 00:18:54.539 fused_ordering(759) 00:18:54.539 fused_ordering(760) 00:18:54.539 fused_ordering(761) 00:18:54.539 fused_ordering(762) 00:18:54.539 fused_ordering(763) 00:18:54.539 fused_ordering(764) 00:18:54.539 fused_ordering(765) 00:18:54.539 fused_ordering(766) 00:18:54.539 fused_ordering(767) 00:18:54.539 fused_ordering(768) 00:18:54.539 fused_ordering(769) 00:18:54.539 fused_ordering(770) 00:18:54.539 fused_ordering(771) 00:18:54.539 fused_ordering(772) 00:18:54.539 fused_ordering(773) 00:18:54.539 fused_ordering(774) 00:18:54.539 fused_ordering(775) 00:18:54.539 fused_ordering(776) 00:18:54.539 fused_ordering(777) 00:18:54.539 fused_ordering(778) 00:18:54.539 fused_ordering(779) 00:18:54.539 fused_ordering(780) 00:18:54.539 fused_ordering(781) 00:18:54.539 fused_ordering(782) 00:18:54.539 fused_ordering(783) 00:18:54.539 fused_ordering(784) 00:18:54.539 fused_ordering(785) 00:18:54.539 fused_ordering(786) 00:18:54.539 fused_ordering(787) 00:18:54.539 fused_ordering(788) 00:18:54.539 fused_ordering(789) 00:18:54.539 fused_ordering(790) 00:18:54.539 fused_ordering(791) 00:18:54.539 fused_ordering(792) 00:18:54.539 fused_ordering(793) 00:18:54.539 fused_ordering(794) 00:18:54.539 fused_ordering(795) 00:18:54.539 fused_ordering(796) 00:18:54.539 fused_ordering(797) 00:18:54.539 fused_ordering(798) 00:18:54.539 fused_ordering(799) 00:18:54.539 fused_ordering(800) 00:18:54.539 fused_ordering(801) 00:18:54.539 fused_ordering(802) 00:18:54.539 fused_ordering(803) 00:18:54.539 fused_ordering(804) 00:18:54.539 fused_ordering(805) 00:18:54.539 fused_ordering(806) 00:18:54.539 fused_ordering(807) 00:18:54.539 fused_ordering(808) 00:18:54.539 fused_ordering(809) 00:18:54.539 fused_ordering(810) 00:18:54.539 fused_ordering(811) 00:18:54.539 fused_ordering(812) 00:18:54.539 fused_ordering(813) 00:18:54.539 fused_ordering(814) 00:18:54.539 fused_ordering(815) 00:18:54.539 
fused_ordering(816) 00:18:54.539 fused_ordering(817) 00:18:54.539 fused_ordering(818) 00:18:54.539 fused_ordering(819) 00:18:54.539 fused_ordering(820) 00:18:55.105 fused_ordering(821) 00:18:55.105 fused_ordering(822) 00:18:55.105 fused_ordering(823) 00:18:55.105 fused_ordering(824) 00:18:55.105 fused_ordering(825) 00:18:55.105 fused_ordering(826) 00:18:55.105 fused_ordering(827) 00:18:55.105 fused_ordering(828) 00:18:55.105 fused_ordering(829) 00:18:55.105 fused_ordering(830) 00:18:55.105 fused_ordering(831) 00:18:55.105 fused_ordering(832) 00:18:55.105 fused_ordering(833) 00:18:55.105 fused_ordering(834) 00:18:55.105 fused_ordering(835) 00:18:55.105 fused_ordering(836) 00:18:55.105 fused_ordering(837) 00:18:55.105 fused_ordering(838) 00:18:55.105 fused_ordering(839) 00:18:55.105 fused_ordering(840) 00:18:55.105 fused_ordering(841) 00:18:55.105 fused_ordering(842) 00:18:55.105 fused_ordering(843) 00:18:55.105 fused_ordering(844) 00:18:55.105 fused_ordering(845) 00:18:55.105 fused_ordering(846) 00:18:55.105 fused_ordering(847) 00:18:55.105 fused_ordering(848) 00:18:55.105 fused_ordering(849) 00:18:55.105 fused_ordering(850) 00:18:55.105 fused_ordering(851) 00:18:55.105 fused_ordering(852) 00:18:55.105 fused_ordering(853) 00:18:55.105 fused_ordering(854) 00:18:55.105 fused_ordering(855) 00:18:55.105 fused_ordering(856) 00:18:55.105 fused_ordering(857) 00:18:55.105 fused_ordering(858) 00:18:55.105 fused_ordering(859) 00:18:55.105 fused_ordering(860) 00:18:55.105 fused_ordering(861) 00:18:55.105 fused_ordering(862) 00:18:55.105 fused_ordering(863) 00:18:55.105 fused_ordering(864) 00:18:55.105 fused_ordering(865) 00:18:55.105 fused_ordering(866) 00:18:55.105 fused_ordering(867) 00:18:55.105 fused_ordering(868) 00:18:55.105 fused_ordering(869) 00:18:55.105 fused_ordering(870) 00:18:55.105 fused_ordering(871) 00:18:55.105 fused_ordering(872) 00:18:55.105 fused_ordering(873) 00:18:55.105 fused_ordering(874) 00:18:55.105 fused_ordering(875) 00:18:55.105 fused_ordering(876) 
00:18:55.105 fused_ordering(877) 00:18:55.105 fused_ordering(878) 00:18:55.105 fused_ordering(879) 00:18:55.105 fused_ordering(880) 00:18:55.105 fused_ordering(881) 00:18:55.105 fused_ordering(882) 00:18:55.105 fused_ordering(883) 00:18:55.105 fused_ordering(884) 00:18:55.105 fused_ordering(885) 00:18:55.105 fused_ordering(886) 00:18:55.105 fused_ordering(887) 00:18:55.105 fused_ordering(888) 00:18:55.105 fused_ordering(889) 00:18:55.105 fused_ordering(890) 00:18:55.105 fused_ordering(891) 00:18:55.105 fused_ordering(892) 00:18:55.105 fused_ordering(893) 00:18:55.105 fused_ordering(894) 00:18:55.105 fused_ordering(895) 00:18:55.105 fused_ordering(896) 00:18:55.105 fused_ordering(897) 00:18:55.105 fused_ordering(898) 00:18:55.105 fused_ordering(899) 00:18:55.105 fused_ordering(900) 00:18:55.105 fused_ordering(901) 00:18:55.105 fused_ordering(902) 00:18:55.105 fused_ordering(903) 00:18:55.105 fused_ordering(904) 00:18:55.105 fused_ordering(905) 00:18:55.105 fused_ordering(906) 00:18:55.105 fused_ordering(907) 00:18:55.105 fused_ordering(908) 00:18:55.105 fused_ordering(909) 00:18:55.105 fused_ordering(910) 00:18:55.105 fused_ordering(911) 00:18:55.105 fused_ordering(912) 00:18:55.105 fused_ordering(913) 00:18:55.105 fused_ordering(914) 00:18:55.105 fused_ordering(915) 00:18:55.105 fused_ordering(916) 00:18:55.105 fused_ordering(917) 00:18:55.105 fused_ordering(918) 00:18:55.105 fused_ordering(919) 00:18:55.105 fused_ordering(920) 00:18:55.105 fused_ordering(921) 00:18:55.105 fused_ordering(922) 00:18:55.105 fused_ordering(923) 00:18:55.105 fused_ordering(924) 00:18:55.105 fused_ordering(925) 00:18:55.105 fused_ordering(926) 00:18:55.105 fused_ordering(927) 00:18:55.105 fused_ordering(928) 00:18:55.105 fused_ordering(929) 00:18:55.105 fused_ordering(930) 00:18:55.105 fused_ordering(931) 00:18:55.105 fused_ordering(932) 00:18:55.105 fused_ordering(933) 00:18:55.105 fused_ordering(934) 00:18:55.105 fused_ordering(935) 00:18:55.105 fused_ordering(936) 00:18:55.105 
fused_ordering(937) 00:18:55.105 fused_ordering(938) 00:18:55.105 fused_ordering(939) 00:18:55.105 fused_ordering(940) 00:18:55.105 fused_ordering(941) 00:18:55.105 fused_ordering(942) 00:18:55.105 fused_ordering(943) 00:18:55.105 fused_ordering(944) 00:18:55.105 fused_ordering(945) 00:18:55.105 fused_ordering(946) 00:18:55.105 fused_ordering(947) 00:18:55.105 fused_ordering(948) 00:18:55.105 fused_ordering(949) 00:18:55.105 fused_ordering(950) 00:18:55.105 fused_ordering(951) 00:18:55.105 fused_ordering(952) 00:18:55.105 fused_ordering(953) 00:18:55.105 fused_ordering(954) 00:18:55.105 fused_ordering(955) 00:18:55.105 fused_ordering(956) 00:18:55.105 fused_ordering(957) 00:18:55.105 fused_ordering(958) 00:18:55.105 fused_ordering(959) 00:18:55.105 fused_ordering(960) 00:18:55.105 fused_ordering(961) 00:18:55.105 fused_ordering(962) 00:18:55.105 fused_ordering(963) 00:18:55.105 fused_ordering(964) 00:18:55.105 fused_ordering(965) 00:18:55.105 fused_ordering(966) 00:18:55.105 fused_ordering(967) 00:18:55.105 fused_ordering(968) 00:18:55.105 fused_ordering(969) 00:18:55.105 fused_ordering(970) 00:18:55.105 fused_ordering(971) 00:18:55.105 fused_ordering(972) 00:18:55.105 fused_ordering(973) 00:18:55.105 fused_ordering(974) 00:18:55.105 fused_ordering(975) 00:18:55.105 fused_ordering(976) 00:18:55.105 fused_ordering(977) 00:18:55.105 fused_ordering(978) 00:18:55.105 fused_ordering(979) 00:18:55.105 fused_ordering(980) 00:18:55.105 fused_ordering(981) 00:18:55.105 fused_ordering(982) 00:18:55.105 fused_ordering(983) 00:18:55.105 fused_ordering(984) 00:18:55.105 fused_ordering(985) 00:18:55.105 fused_ordering(986) 00:18:55.105 fused_ordering(987) 00:18:55.105 fused_ordering(988) 00:18:55.105 fused_ordering(989) 00:18:55.105 fused_ordering(990) 00:18:55.105 fused_ordering(991) 00:18:55.105 fused_ordering(992) 00:18:55.105 fused_ordering(993) 00:18:55.105 fused_ordering(994) 00:18:55.105 fused_ordering(995) 00:18:55.105 fused_ordering(996) 00:18:55.105 fused_ordering(997) 
00:18:55.105 fused_ordering(998) 00:18:55.105 fused_ordering(999) 00:18:55.105 fused_ordering(1000) 00:18:55.105 fused_ordering(1001) 00:18:55.105 fused_ordering(1002) 00:18:55.105 fused_ordering(1003) 00:18:55.105 fused_ordering(1004) 00:18:55.105 fused_ordering(1005) 00:18:55.105 fused_ordering(1006) 00:18:55.105 fused_ordering(1007) 00:18:55.105 fused_ordering(1008) 00:18:55.105 fused_ordering(1009) 00:18:55.105 fused_ordering(1010) 00:18:55.105 fused_ordering(1011) 00:18:55.105 fused_ordering(1012) 00:18:55.105 fused_ordering(1013) 00:18:55.105 fused_ordering(1014) 00:18:55.105 fused_ordering(1015) 00:18:55.105 fused_ordering(1016) 00:18:55.105 fused_ordering(1017) 00:18:55.105 fused_ordering(1018) 00:18:55.105 fused_ordering(1019) 00:18:55.105 fused_ordering(1020) 00:18:55.105 fused_ordering(1021) 00:18:55.105 fused_ordering(1022) 00:18:55.105 fused_ordering(1023) 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.105 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.105 rmmod nvme_tcp 00:18:55.364 rmmod nvme_fabrics 00:18:55.364 rmmod nvme_keyring 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 634025 ']' 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 634025 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 634025 ']' 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 634025 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 634025 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 634025' 00:18:55.364 killing process with pid 634025 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 634025 00:18:55.364 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 634025 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
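The teardown trace above follows a recognizable shell pattern: probe the PID with `kill -0`, confirm the command name via `ps -o comm=`, refuse to act if the process is a bare `sudo`, then kill and `wait` to reap it. A minimal standalone sketch of that pattern (the `killprocess` name and the `sleep` stand-in are illustrative here, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Standalone sketch of the killprocess pattern seen in the trace:
# check liveness, verify the command name, kill, then reap with wait.
set -u

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # is the PID alive at all?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")  # same comm= lookup as the trace
    [ "$name" = sudo ] && return 1              # never kill a bare sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                     # reap; ignore the signal status
    return 0
}

sleep 60 &                                      # stand-in for the target app
target_pid=$!
killprocess "$target_pid"
```

The `wait` matters: without it the killed target would linger as a zombie and a later `kill -0` on the same PID would still succeed, defeating the liveness check.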
00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.625 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.529 00:18:57.529 real 0m7.380s 00:18:57.529 user 0m4.920s 00:18:57.529 sys 0m3.074s 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 ************************************ 00:18:57.529 END TEST nvmf_fused_ordering 00:18:57.529 ************************************ 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:57.529 12:33:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.529 ************************************ 00:18:57.529 START TEST nvmf_ns_masking 00:18:57.529 ************************************ 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:57.529 * Looking for test storage... 00:18:57.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:18:57.529 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.788 12:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.788 --rc genhtml_branch_coverage=1 00:18:57.788 --rc genhtml_function_coverage=1 00:18:57.788 --rc genhtml_legend=1 00:18:57.788 --rc geninfo_all_blocks=1 00:18:57.788 --rc geninfo_unexecuted_blocks=1 00:18:57.788 00:18:57.788 ' 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.788 --rc genhtml_branch_coverage=1 00:18:57.788 --rc genhtml_function_coverage=1 00:18:57.788 --rc genhtml_legend=1 00:18:57.788 --rc geninfo_all_blocks=1 00:18:57.788 --rc geninfo_unexecuted_blocks=1 00:18:57.788 00:18:57.788 ' 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.788 --rc genhtml_branch_coverage=1 00:18:57.788 --rc genhtml_function_coverage=1 00:18:57.788 --rc genhtml_legend=1 00:18:57.788 --rc geninfo_all_blocks=1 00:18:57.788 --rc geninfo_unexecuted_blocks=1 00:18:57.788 00:18:57.788 ' 00:18:57.788 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:57.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.788 --rc genhtml_branch_coverage=1 00:18:57.788 --rc 
genhtml_function_coverage=1 00:18:57.788 --rc genhtml_legend=1 00:18:57.788 --rc geninfo_all_blocks=1 00:18:57.789 --rc geninfo_unexecuted_blocks=1 00:18:57.789 00:18:57.789 ' 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8ea505f7-012f-4dd5-9676-9775363b9d5a 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b16e48db-0f1b-4336-bef3-ae7ec4bf8406 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5634ecf7-36da-4aee-9e40-8cf752098769 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:57.789 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:00.323 12:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.323 12:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:00.323 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:00.323 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:00.323 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:00.323 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:00.323 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:00.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:19:00.324 00:19:00.324 --- 10.0.0.2 ping statistics --- 00:19:00.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.324 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
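The trace above shows the harness moving the target NIC into a network namespace so initiator and target can talk over real TCP on one host. A condensed dry-run sketch of that plumbing follows; `RUN=echo` is an assumption added here so the sketch prints the commands instead of executing them (the real ones need root and the `cvl_0_*` NICs):

```shell
# Dry-run sketch of the namespace setup traced above. RUN=echo makes this
# runnable anywhere; drop it (as root, with real NICs) to actually apply.
RUN=echo
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"            # target NIC moves into the namespace
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side stays in the root ns
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` calls in the trace then verify reachability in both directions before the target app starts.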
00:19:00.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:19:00.324 00:19:00.324 --- 10.0.0.1 ping statistics --- 00:19:00.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.324 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=636356 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 636356 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 636356 ']' 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.324 [2024-11-05 12:33:29.273741] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:19:00.324 [2024-11-05 12:33:29.273836] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.324 [2024-11-05 12:33:29.346291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.324 [2024-11-05 12:33:29.390730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.324 [2024-11-05 12:33:29.390791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:00.324 [2024-11-05 12:33:29.390819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.324 [2024-11-05 12:33:29.390829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.324 [2024-11-05 12:33:29.390839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.324 [2024-11-05 12:33:29.391458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.324 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.615 [2024-11-05 12:33:29.772874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.615 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:00.615 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:00.616 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:19:00.873 Malloc1 00:19:00.873 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:01.131 Malloc2 00:19:01.389 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.647 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:01.904 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.163 [2024-11-05 12:33:31.164437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.163 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:02.163 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5634ecf7-36da-4aee-9e40-8cf752098769 -a 10.0.0.2 -s 4420 -i 4 00:19:02.163 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.163 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:02.163 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.163 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:19:02.163 12:33:31 
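The target-side configuration in the trace is a short `rpc.py` sequence. The sketch below condenses it; `rpc` is a stub standing in for `scripts/rpc.py` so the sketch runs without a live `nvmf_tgt`, and the initiator command is shown as a comment since it needs `nvme-cli` and the `nvme-tcp` module:

```shell
# Stub for scripts/rpc.py (assumption: echoes instead of talking to spdk.sock).
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
rpc bdev_malloc_create 64 512 -b Malloc2
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side, from the root namespace (shown, not run here):
# nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
#     -a 10.0.0.2 -s 4420 -i 4
```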
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.690 [ 0]:0x1 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.690 
12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e1601fa57e584263bb114c3a922a2b8d 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e1601fa57e584263bb114c3a922a2b8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.690 [ 0]:0x1 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e1601fa57e584263bb114c3a922a2b8d 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e1601fa57e584263bb114c3a922a2b8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:04.690 [ 1]:0x2 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c5428a6e0cb348ec9323231acf4f120d 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c5428a6e0cb348ec9323231acf4f120d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.690 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.948 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:05.205 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:05.205 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5634ecf7-36da-4aee-9e40-8cf752098769 -a 10.0.0.2 -s 4420 -i 4 00:19:05.463 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:05.463 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:05.463 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.463 12:33:34 
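The `ns_is_visible` probe traced above boils down to one fact: a namespace the controller is not allowed to see reports an all-zero NGUID from `nvme id-ns`. A minimal reimplementation of that comparison, exercised on the two NGUIDs seen in the log:

```shell
# $1 is the nguid string, as produced in the trace by:
#   nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid
# Returns success (0) when the namespace is visible, failure when masked.
ns_is_visible() {
  [ "$1" != "00000000000000000000000000000000" ]
}

ns_is_visible e1601fa57e584263bb114c3a922a2b8d && echo visible   # -> visible
ns_is_visible 00000000000000000000000000000000 || echo masked    # -> masked
```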
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:19:05.463 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:19:05.463 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:07.992 [ 0]:0x2 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c5428a6e0cb348ec9323231acf4f120d 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c5428a6e0cb348ec9323231acf4f120d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.992 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:07.992 [ 0]:0x1 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e1601fa57e584263bb114c3a922a2b8d 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e1601fa57e584263bb114c3a922a2b8d != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:07.992 [ 1]:0x2 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c5428a6e0cb348ec9323231acf4f120d 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c5428a6e0cb348ec9323231acf4f120d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:07.992 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.250 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.508 [ 0]:0x2 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c5428a6e0cb348ec9323231acf4f120d 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c5428a6e0cb348ec9323231acf4f120d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:08.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.508 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.766 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:08.766 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5634ecf7-36da-4aee-9e40-8cf752098769 -a 10.0.0.2 -s 4420 -i 4 00:19:09.030 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:09.030 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:19:09.030 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.030 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:09.030 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
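The masking cycle the trace just completed is driven by three RPCs: add the namespace with `--no-auto-visible` (hidden from every host by default), then grant and revoke per-host access. A stubbed outline of that toggle (`rpc` again stands in for `scripts/rpc.py`):

```shell
rpc() { echo "rpc.py $*"; }   # stub; the real calls go to a running nvmf_tgt
SUB=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2016-06.io.spdk:host1

rpc nvmf_subsystem_add_ns "$SUB" Malloc1 -n 1 --no-auto-visible  # NSID 1 hidden by default
rpc nvmf_ns_add_host    "$SUB" 1 "$HOSTNQN"                      # NSID 1 now visible to host1
rpc nvmf_ns_remove_host "$SUB" 1 "$HOSTNQN"                      # hidden from host1 again
```

Each toggle shows up on the initiator as the NGUID flipping between the real value and all zeros, which is exactly what the `ns_is_visible` checks in the trace assert.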
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:09.030 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:10.984 [ 0]:0x1 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:10.984 12:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e1601fa57e584263bb114c3a922a2b8d 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e1601fa57e584263bb114c3a922a2b8d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:10.984 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.241 [ 1]:0x2 00:19:11.241 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.241 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.241 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c5428a6e0cb348ec9323231acf4f120d 00:19:11.241 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c5428a6e0cb348ec9323231acf4f120d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.241 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:11.500 
12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.500 [ 0]:0x2 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c5428a6e0cb348ec9323231acf4f120d 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c5428a6e0cb348ec9323231acf4f120d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.500 12:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:11.500 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.758 [2024-11-05 12:33:40.934005] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:11.758 request: 00:19:11.758 { 00:19:11.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.758 "nsid": 2, 00:19:11.758 "host": "nqn.2016-06.io.spdk:host1", 00:19:11.758 "method": "nvmf_ns_remove_host", 00:19:11.758 "req_id": 1 00:19:11.758 } 00:19:11.758 Got JSON-RPC error response 00:19:11.758 response: 00:19:11.758 { 00:19:11.758 "code": -32602, 00:19:11.758 "message": "Invalid parameters" 00:19:11.758 } 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.758 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:12.016 12:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:12.016 [ 0]:0x2 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c5428a6e0cb348ec9323231acf4f120d 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c5428a6e0cb348ec9323231acf4f120d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:12.016 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:12.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=637874 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 637874 /var/tmp/host.sock 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 637874 ']' 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:12.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:12.017 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:12.017 [2024-11-05 12:33:41.155299] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:19:12.017 [2024-11-05 12:33:41.155380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637874 ] 00:19:12.017 [2024-11-05 12:33:41.222398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.275 [2024-11-05 12:33:41.268904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.275 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:12.275 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:19:12.275 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.841 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:12.841 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8ea505f7-012f-4dd5-9676-9775363b9d5a 00:19:12.841 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:12.841 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8EA505F7012F4DD596769775363B9D5A -i 00:19:13.098 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b16e48db-0f1b-4336-bef3-ae7ec4bf8406 00:19:13.098 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:13.098 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B16E48DB0F1B4336BEF3AE7EC4BF8406 -i 00:19:13.356 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:13.922 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:13.922 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:13.922 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:14.488 nvme0n1 00:19:14.488 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:14.488 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:14.746 nvme1n2 00:19:14.746 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:14.746 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:14.746 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:14.746 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:14.746 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:15.004 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:15.004 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:15.004 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:15.004 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:15.262 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8ea505f7-012f-4dd5-9676-9775363b9d5a == \8\e\a\5\0\5\f\7\-\0\1\2\f\-\4\d\d\5\-\9\6\7\6\-\9\7\7\5\3\6\3\b\9\d\5\a ]] 00:19:15.262 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:15.262 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:15.262 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:15.520 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b16e48db-0f1b-4336-bef3-ae7ec4bf8406 == \b\1\6\e\4\8\d\b\-\0\f\1\b\-\4\3\3\6\-\b\e\f\3\-\a\e\7\e\c\4\b\f\8\4\0\6 ]] 00:19:15.520 12:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8ea505f7-012f-4dd5-9676-9775363b9d5a 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8EA505F7012F4DD596769775363B9D5A 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8EA505F7012F4DD596769775363B9D5A 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:16.085 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8EA505F7012F4DD596769775363B9D5A 00:19:16.342 [2024-11-05 12:33:45.551645] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:16.343 [2024-11-05 12:33:45.551685] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:16.343 [2024-11-05 12:33:45.551715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:16.343 request: 00:19:16.343 { 00:19:16.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.343 "namespace": { 00:19:16.343 "bdev_name": "invalid", 00:19:16.343 "nsid": 1, 00:19:16.343 "nguid": "8EA505F7012F4DD596769775363B9D5A", 00:19:16.343 "no_auto_visible": false 00:19:16.343 }, 00:19:16.343 "method": "nvmf_subsystem_add_ns", 00:19:16.343 "req_id": 1 00:19:16.343 } 00:19:16.343 Got JSON-RPC error response 00:19:16.343 response: 00:19:16.343 { 00:19:16.343 "code": -32602, 00:19:16.343 "message": "Invalid parameters" 00:19:16.343 } 00:19:16.343 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:19:16.343 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.343 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.343 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.343 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8ea505f7-012f-4dd5-9676-9775363b9d5a 00:19:16.343 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:16.343 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8EA505F7012F4DD596769775363B9D5A -i 00:19:16.908 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:18.804 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:18.804 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:18.804 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 637874 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 637874 ']' 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 637874 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 637874 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 637874' 00:19:19.062 killing process with pid 637874 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 637874 00:19:19.062 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 637874 00:19:19.320 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:19.884 rmmod nvme_tcp 00:19:19.884 rmmod 
nvme_fabrics 00:19:19.884 rmmod nvme_keyring 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 636356 ']' 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 636356 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 636356 ']' 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 636356 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 636356 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 636356' 00:19:19.884 killing process with pid 636356 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 636356 00:19:19.884 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 636356 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:20.142 12:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.142 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:22.048 00:19:22.048 real 0m24.524s 00:19:22.048 user 0m35.597s 00:19:22.048 sys 0m4.537s 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:22.048 ************************************ 00:19:22.048 END TEST nvmf_ns_masking 00:19:22.048 ************************************ 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:22.048 ************************************ 00:19:22.048 START TEST nvmf_nvme_cli 00:19:22.048 ************************************ 00:19:22.048 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:22.307 * Looking for test storage... 00:19:22.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.307 12:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.307 --rc genhtml_branch_coverage=1 00:19:22.307 --rc genhtml_function_coverage=1 00:19:22.307 --rc genhtml_legend=1 00:19:22.307 --rc geninfo_all_blocks=1 00:19:22.307 --rc geninfo_unexecuted_blocks=1 00:19:22.307 
00:19:22.307 ' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.307 --rc genhtml_branch_coverage=1 00:19:22.307 --rc genhtml_function_coverage=1 00:19:22.307 --rc genhtml_legend=1 00:19:22.307 --rc geninfo_all_blocks=1 00:19:22.307 --rc geninfo_unexecuted_blocks=1 00:19:22.307 00:19:22.307 ' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.307 --rc genhtml_branch_coverage=1 00:19:22.307 --rc genhtml_function_coverage=1 00:19:22.307 --rc genhtml_legend=1 00:19:22.307 --rc geninfo_all_blocks=1 00:19:22.307 --rc geninfo_unexecuted_blocks=1 00:19:22.307 00:19:22.307 ' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.307 --rc genhtml_branch_coverage=1 00:19:22.307 --rc genhtml_function_coverage=1 00:19:22.307 --rc genhtml_legend=1 00:19:22.307 --rc geninfo_all_blocks=1 00:19:22.307 --rc geninfo_unexecuted_blocks=1 00:19:22.307 00:19:22.307 ' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
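The xtrace records above step through the dotted-version comparison in scripts/common.sh: split each version string on `.` into an array (`IFS=.-:` / `read -ra`), then walk the components with `(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))` and compare them as integers. A minimal standalone sketch of the same idea follows; the function name `ver_lt` is illustrative, not the actual scripts/common.sh API.

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component-wise as integers,
# mirroring the shape of the traced scripts/common.sh helpers.
# ver_lt is an illustrative stand-in, not the real SPDK function.
ver_lt() {
    local IFS=.
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    # Iterate over the longer of the two arrays, padding with 0,
    # like the (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop above.
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0   # strictly less at this component
        (( a > b )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

ver_lt 1.2 2 && echo "1.2 < 2"    # prints: 1.2 < 2
```

Comparing components numerically rather than lexically is the point of the `[[ 1 =~ ^[0-9]+$ ]]` guards in the trace: it makes `1.9` sort before `1.10`.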
00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.307 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.308 12:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:22.308 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:24.839 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:24.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:24.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.839 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:24.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:24.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.839 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.839 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:19:24.840 00:19:24.840 --- 10.0.0.2 ping statistics --- 00:19:24.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.840 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:19:24.840 00:19:24.840 --- 10.0.0.1 ping statistics --- 00:19:24.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.840 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.840 12:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=640784 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 640784 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 640784 ']' 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:24.840 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:24.840 [2024-11-05 12:33:53.856170] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:19:24.840 [2024-11-05 12:33:53.856261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.840 [2024-11-05 12:33:53.931366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.840 [2024-11-05 12:33:53.978513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.840 [2024-11-05 12:33:53.978559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.840 [2024-11-05 12:33:53.978587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.840 [2024-11-05 12:33:53.978598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.840 [2024-11-05 12:33:53.978607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.840 [2024-11-05 12:33:53.980153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.840 [2024-11-05 12:33:53.980243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.840 [2024-11-05 12:33:53.980308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.840 [2024-11-05 12:33:53.980311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 [2024-11-05 12:33:54.120773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 Malloc0 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 Malloc1 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 [2024-11-05 12:33:54.220791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.098 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:19:25.356 00:19:25.356 Discovery Log Number of Records 2, Generation counter 2 00:19:25.356 =====Discovery Log Entry 0====== 00:19:25.356 trtype: tcp 00:19:25.356 adrfam: ipv4 00:19:25.356 subtype: current discovery subsystem 00:19:25.356 treq: not required 00:19:25.356 portid: 0 00:19:25.356 trsvcid: 4420 
00:19:25.356 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:25.356 traddr: 10.0.0.2 00:19:25.356 eflags: explicit discovery connections, duplicate discovery information 00:19:25.356 sectype: none 00:19:25.356 =====Discovery Log Entry 1====== 00:19:25.356 trtype: tcp 00:19:25.356 adrfam: ipv4 00:19:25.356 subtype: nvme subsystem 00:19:25.356 treq: not required 00:19:25.356 portid: 0 00:19:25.356 trsvcid: 4420 00:19:25.356 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:25.356 traddr: 10.0.0.2 00:19:25.356 eflags: none 00:19:25.356 sectype: none 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:25.356 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:25.922 12:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:25.922 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:19:25.922 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.922 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:19:25.922 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:19:25.922 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:28.450 
12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:28.450 /dev/nvme0n2 ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:28.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:28.450 rmmod nvme_tcp 00:19:28.450 rmmod nvme_fabrics 00:19:28.450 rmmod nvme_keyring 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 640784 ']' 
00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 640784 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 640784 ']' 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 640784 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 640784 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 640784' 00:19:28.450 killing process with pid 640784 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 640784 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 640784 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.450 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:30.987 00:19:30.987 real 0m8.383s 00:19:30.987 user 0m15.185s 00:19:30.987 sys 0m2.373s 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.987 ************************************ 00:19:30.987 END TEST nvmf_nvme_cli 00:19:30.987 ************************************ 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.987 ************************************ 00:19:30.987 START TEST 
nvmf_vfio_user 00:19:30.987 ************************************ 00:19:30.987 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:30.987 * Looking for test storage... 00:19:30.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.988 12:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:30.988 12:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.988 --rc genhtml_branch_coverage=1 00:19:30.988 --rc genhtml_function_coverage=1 00:19:30.988 --rc genhtml_legend=1 00:19:30.988 --rc geninfo_all_blocks=1 00:19:30.988 --rc geninfo_unexecuted_blocks=1 00:19:30.988 00:19:30.988 ' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.988 --rc genhtml_branch_coverage=1 00:19:30.988 --rc genhtml_function_coverage=1 00:19:30.988 --rc genhtml_legend=1 00:19:30.988 --rc geninfo_all_blocks=1 00:19:30.988 --rc geninfo_unexecuted_blocks=1 00:19:30.988 00:19:30.988 ' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.988 --rc genhtml_branch_coverage=1 00:19:30.988 --rc genhtml_function_coverage=1 00:19:30.988 --rc genhtml_legend=1 00:19:30.988 --rc geninfo_all_blocks=1 00:19:30.988 --rc geninfo_unexecuted_blocks=1 00:19:30.988 00:19:30.988 ' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.988 --rc genhtml_branch_coverage=1 00:19:30.988 --rc genhtml_function_coverage=1 00:19:30.988 --rc genhtml_legend=1 00:19:30.988 --rc geninfo_all_blocks=1 00:19:30.988 --rc geninfo_unexecuted_blocks=1 00:19:30.988 00:19:30.988 ' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.988 
12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.988 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:30.989 12:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=641706 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 641706' 00:19:30.989 Process pid: 641706 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 641706 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 
641706 ']' 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.989 12:33:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:30.989 [2024-11-05 12:33:59.927053] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:19:30.989 [2024-11-05 12:33:59.927127] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.989 [2024-11-05 12:33:59.996625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.989 [2024-11-05 12:34:00.052921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.989 [2024-11-05 12:34:00.052989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.989 [2024-11-05 12:34:00.053011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.989 [2024-11-05 12:34:00.053028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.989 [2024-11-05 12:34:00.053044] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:30.989 [2024-11-05 12:34:00.058880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.989 [2024-11-05 12:34:00.058944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.989 [2024-11-05 12:34:00.059018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.989 [2024-11-05 12:34:00.059010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:30.989 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:30.989 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:19:30.989 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:32.359 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:32.359 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:32.359 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:32.359 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:32.359 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:32.360 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:32.617 Malloc1 00:19:32.617 12:34:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:32.874 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:33.132 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:33.389 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:33.389 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:33.389 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:33.647 Malloc2 00:19:33.647 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:33.905 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:34.162 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:34.420 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:34.420 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:34.420 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:19:34.420 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:34.420 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:34.420 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:34.679 [2024-11-05 12:34:03.665394] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:19:34.679 [2024-11-05 12:34:03.665438] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642132 ] 00:19:34.679 [2024-11-05 12:34:03.715035] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:34.679 [2024-11-05 12:34:03.724376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:34.679 [2024-11-05 12:34:03.724406] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f79dd344000 00:19:34.679 [2024-11-05 12:34:03.725372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.726366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.727375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.728379] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.729383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.730386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.731391] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.732395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:34.679 [2024-11-05 12:34:03.733400] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:34.679 [2024-11-05 12:34:03.733421] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f79dc03c000 00:19:34.679 [2024-11-05 12:34:03.734575] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:34.679 [2024-11-05 12:34:03.750209] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:34.679 [2024-11-05 12:34:03.750254] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:19:34.679 [2024-11-05 12:34:03.752497] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:34.679 [2024-11-05 12:34:03.752548] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:34.679 [2024-11-05 12:34:03.752662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:19:34.679 [2024-11-05 12:34:03.752691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:19:34.679 [2024-11-05 12:34:03.752702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:19:34.679 [2024-11-05 12:34:03.753495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:34.679 [2024-11-05 12:34:03.753514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:19:34.679 [2024-11-05 12:34:03.753527] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:19:34.679 [2024-11-05 12:34:03.754498] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:34.679 [2024-11-05 12:34:03.754518] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:19:34.679 [2024-11-05 12:34:03.754531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:34.679 [2024-11-05 12:34:03.755502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:34.679 [2024-11-05 12:34:03.755520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:34.679 [2024-11-05 12:34:03.756506] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:34.680 [2024-11-05 12:34:03.756524] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:34.680 [2024-11-05 12:34:03.756533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:34.680 [2024-11-05 12:34:03.756544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:34.680 [2024-11-05 12:34:03.756653] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:19:34.680 [2024-11-05 12:34:03.756661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:34.680 [2024-11-05 12:34:03.756669] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:34.680 [2024-11-05 12:34:03.760879] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:34.680 [2024-11-05 12:34:03.761534] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:34.680 [2024-11-05 12:34:03.762540] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:34.680 [2024-11-05 12:34:03.763533] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:34.680 [2024-11-05 12:34:03.763669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:34.680 [2024-11-05 12:34:03.764547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:34.680 [2024-11-05 12:34:03.764564] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:34.680 [2024-11-05 12:34:03.764572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.764596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:19:34.680 [2024-11-05 12:34:03.764609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.764633] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:34.680 [2024-11-05 12:34:03.764643] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:34.680 [2024-11-05 12:34:03.764649] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:34.680 [2024-11-05 12:34:03.764668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:34.680 [2024-11-05 12:34:03.764727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:19:34.680 [2024-11-05 12:34:03.764743] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:19:34.680 [2024-11-05 12:34:03.764752] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:19:34.680 [2024-11-05 12:34:03.764759] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:19:34.680 [2024-11-05 12:34:03.764766] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:34.680 [2024-11-05 12:34:03.764774] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:19:34.680 [2024-11-05 12:34:03.764785] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:19:34.680 [2024-11-05 12:34:03.764793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.764806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.764820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:34.680 [2024-11-05 12:34:03.764833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:34.680 [2024-11-05 12:34:03.764877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.680 [2024-11-05 12:34:03.764894] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.680 [2024-11-05 12:34:03.764906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.680 [2024-11-05 12:34:03.764919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.680 [2024-11-05 12:34:03.764931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.764945] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.764958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:34.680 [2024-11-05 12:34:03.764971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:34.680 [2024-11-05 12:34:03.764986] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:19:34.680 [2024-11-05 12:34:03.764996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:34.680 [2024-11-05 12:34:03.765042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:34.680 [2024-11-05 12:34:03.765110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765154] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:34.680 [2024-11-05 12:34:03.765162] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:34.680 [2024-11-05 12:34:03.765168] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:34.680 [2024-11-05 12:34:03.765177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:34.680 [2024-11-05 12:34:03.765196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:34.680 [2024-11-05 12:34:03.765212] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:19:34.680 [2024-11-05 12:34:03.765228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:34.680 [2024-11-05 12:34:03.765261] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:34.680 [2024-11-05 12:34:03.765267] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:34.680 [2024-11-05 12:34:03.765276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:34.680 [2024-11-05 12:34:03.765298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:34.680 [2024-11-05 12:34:03.765320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765351] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:34.680 [2024-11-05 12:34:03.765358] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:34.680 [2024-11-05 12:34:03.765364] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:34.680 [2024-11-05 12:34:03.765373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:34.680 [2024-11-05 12:34:03.765385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
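The IDENTIFY admin commands traced above differ only in their CNS value, carried in the low byte of CDW10: the log shows cdw10:00000001 (controller), cdw10:00000002 (active namespace list), then cdw10:00000000 and cdw10:00000003 against nsid:1 (per-namespace data and its ID descriptors). A small sketch decoding those logged values using the standard NVMe CNS codes; the decoder itself is illustrative, not SPDK code:

```python
# NVMe Identify CNS codes (NVMe Base Specification); the cdw10
# values fed in below are taken from the IDENTIFY trace above.
CNS = {
    0x00: "Identify Namespace",
    0x01: "Identify Controller",
    0x02: "Active Namespace ID List",
    0x03: "Namespace Identification Descriptor List",
}

def decode_identify(cdw10):
    """Return the Identify data structure selected by CDW10[7:0]."""
    return CNS.get(cdw10 & 0xFF, "reserved/unknown")

# cdw10 values observed in the log, in order of issue:
for cdw10 in (0x00000001, 0x00000002, 0x00000000, 0x00000003):
    print(hex(cdw10), "->", decode_identify(cdw10))
```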
00:19:34.680 [2024-11-05 12:34:03.765399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:34.680 [2024-11-05 12:34:03.765450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:19:34.681 [2024-11-05 12:34:03.765458] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:34.681 [2024-11-05 12:34:03.765466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:19:34.681 [2024-11-05 12:34:03.765474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:19:34.681 [2024-11-05 12:34:03.765501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:34.681 [2024-11-05 12:34:03.765520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:34.681 [2024-11-05 12:34:03.765539] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:34.681 [2024-11-05 12:34:03.765551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:34.681 [2024-11-05 12:34:03.765567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:34.681 [2024-11-05 12:34:03.765579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:34.681 [2024-11-05 12:34:03.765595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:34.681 [2024-11-05 12:34:03.765607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:34.681 [2024-11-05 12:34:03.765629] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:34.681 [2024-11-05 12:34:03.765638] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:34.681 [2024-11-05 12:34:03.765648] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:34.681 [2024-11-05 12:34:03.765654] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:34.681 [2024-11-05 12:34:03.765659] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:34.681 [2024-11-05 12:34:03.765669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:34.681 [2024-11-05 12:34:03.765680] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:34.681 [2024-11-05 12:34:03.765688] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:34.681 [2024-11-05 12:34:03.765694] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:34.681 [2024-11-05 12:34:03.765702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:34.681 [2024-11-05 12:34:03.765713] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:34.681 [2024-11-05 12:34:03.765721] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:34.681 [2024-11-05 12:34:03.765726] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:34.681 [2024-11-05 12:34:03.765735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:34.681 [2024-11-05 12:34:03.765750] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:34.681 [2024-11-05 12:34:03.765759] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:34.681 [2024-11-05 12:34:03.765765] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:34.681 [2024-11-05 12:34:03.765774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:34.681 [2024-11-05 12:34:03.765786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:34.681 [2024-11-05 
12:34:03.765806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:34.681 [2024-11-05 12:34:03.765826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:34.681 [2024-11-05 12:34:03.765853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:34.681 ===================================================== 00:19:34.681 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:34.681 ===================================================== 00:19:34.681 Controller Capabilities/Features 00:19:34.681 ================================ 00:19:34.681 Vendor ID: 4e58 00:19:34.681 Subsystem Vendor ID: 4e58 00:19:34.681 Serial Number: SPDK1 00:19:34.681 Model Number: SPDK bdev Controller 00:19:34.681 Firmware Version: 25.01 00:19:34.681 Recommended Arb Burst: 6 00:19:34.681 IEEE OUI Identifier: 8d 6b 50 00:19:34.681 Multi-path I/O 00:19:34.681 May have multiple subsystem ports: Yes 00:19:34.681 May have multiple controllers: Yes 00:19:34.681 Associated with SR-IOV VF: No 00:19:34.681 Max Data Transfer Size: 131072 00:19:34.681 Max Number of Namespaces: 32 00:19:34.681 Max Number of I/O Queues: 127 00:19:34.681 NVMe Specification Version (VS): 1.3 00:19:34.681 NVMe Specification Version (Identify): 1.3 00:19:34.681 Maximum Queue Entries: 256 00:19:34.681 Contiguous Queues Required: Yes 00:19:34.681 Arbitration Mechanisms Supported 00:19:34.681 Weighted Round Robin: Not Supported 00:19:34.681 Vendor Specific: Not Supported 00:19:34.681 Reset Timeout: 15000 ms 00:19:34.681 Doorbell Stride: 4 bytes 00:19:34.681 NVM Subsystem Reset: Not Supported 00:19:34.681 Command Sets Supported 00:19:34.681 NVM Command Set: Supported 00:19:34.681 Boot Partition: Not Supported 00:19:34.681 Memory Page Size Minimum: 4096 bytes 00:19:34.681 
Memory Page Size Maximum: 4096 bytes 00:19:34.681 Persistent Memory Region: Not Supported 00:19:34.681 Optional Asynchronous Events Supported 00:19:34.681 Namespace Attribute Notices: Supported 00:19:34.681 Firmware Activation Notices: Not Supported 00:19:34.681 ANA Change Notices: Not Supported 00:19:34.681 PLE Aggregate Log Change Notices: Not Supported 00:19:34.681 LBA Status Info Alert Notices: Not Supported 00:19:34.681 EGE Aggregate Log Change Notices: Not Supported 00:19:34.681 Normal NVM Subsystem Shutdown event: Not Supported 00:19:34.681 Zone Descriptor Change Notices: Not Supported 00:19:34.681 Discovery Log Change Notices: Not Supported 00:19:34.681 Controller Attributes 00:19:34.681 128-bit Host Identifier: Supported 00:19:34.681 Non-Operational Permissive Mode: Not Supported 00:19:34.681 NVM Sets: Not Supported 00:19:34.681 Read Recovery Levels: Not Supported 00:19:34.681 Endurance Groups: Not Supported 00:19:34.681 Predictable Latency Mode: Not Supported 00:19:34.681 Traffic Based Keep ALive: Not Supported 00:19:34.681 Namespace Granularity: Not Supported 00:19:34.681 SQ Associations: Not Supported 00:19:34.681 UUID List: Not Supported 00:19:34.681 Multi-Domain Subsystem: Not Supported 00:19:34.681 Fixed Capacity Management: Not Supported 00:19:34.681 Variable Capacity Management: Not Supported 00:19:34.681 Delete Endurance Group: Not Supported 00:19:34.681 Delete NVM Set: Not Supported 00:19:34.681 Extended LBA Formats Supported: Not Supported 00:19:34.681 Flexible Data Placement Supported: Not Supported 00:19:34.681 00:19:34.681 Controller Memory Buffer Support 00:19:34.681 ================================ 00:19:34.681 Supported: No 00:19:34.681 00:19:34.681 Persistent Memory Region Support 00:19:34.681 ================================ 00:19:34.681 Supported: No 00:19:34.681 00:19:34.681 Admin Command Set Attributes 00:19:34.681 ============================ 00:19:34.681 Security Send/Receive: Not Supported 00:19:34.681 Format NVM: Not Supported 
00:19:34.681 Firmware Activate/Download: Not Supported 00:19:34.681 Namespace Management: Not Supported 00:19:34.681 Device Self-Test: Not Supported 00:19:34.681 Directives: Not Supported 00:19:34.681 NVMe-MI: Not Supported 00:19:34.681 Virtualization Management: Not Supported 00:19:34.681 Doorbell Buffer Config: Not Supported 00:19:34.681 Get LBA Status Capability: Not Supported 00:19:34.681 Command & Feature Lockdown Capability: Not Supported 00:19:34.681 Abort Command Limit: 4 00:19:34.681 Async Event Request Limit: 4 00:19:34.681 Number of Firmware Slots: N/A 00:19:34.681 Firmware Slot 1 Read-Only: N/A 00:19:34.681 Firmware Activation Without Reset: N/A 00:19:34.681 Multiple Update Detection Support: N/A 00:19:34.681 Firmware Update Granularity: No Information Provided 00:19:34.681 Per-Namespace SMART Log: No 00:19:34.681 Asymmetric Namespace Access Log Page: Not Supported 00:19:34.681 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:34.681 Command Effects Log Page: Supported 00:19:34.681 Get Log Page Extended Data: Supported 00:19:34.681 Telemetry Log Pages: Not Supported 00:19:34.681 Persistent Event Log Pages: Not Supported 00:19:34.681 Supported Log Pages Log Page: May Support 00:19:34.681 Commands Supported & Effects Log Page: Not Supported 00:19:34.681 Feature Identifiers & Effects Log Page:May Support 00:19:34.681 NVMe-MI Commands & Effects Log Page: May Support 00:19:34.681 Data Area 4 for Telemetry Log: Not Supported 00:19:34.681 Error Log Page Entries Supported: 128 00:19:34.681 Keep Alive: Supported 00:19:34.681 Keep Alive Granularity: 10000 ms 00:19:34.681 00:19:34.681 NVM Command Set Attributes 00:19:34.681 ========================== 00:19:34.681 Submission Queue Entry Size 00:19:34.681 Max: 64 00:19:34.681 Min: 64 00:19:34.681 Completion Queue Entry Size 00:19:34.681 Max: 16 00:19:34.681 Min: 16 00:19:34.681 Number of Namespaces: 32 00:19:34.681 Compare Command: Supported 00:19:34.681 Write Uncorrectable Command: Not Supported 00:19:34.681 Dataset 
Management Command: Supported 00:19:34.681 Write Zeroes Command: Supported 00:19:34.681 Set Features Save Field: Not Supported 00:19:34.682 Reservations: Not Supported 00:19:34.682 Timestamp: Not Supported 00:19:34.682 Copy: Supported 00:19:34.682 Volatile Write Cache: Present 00:19:34.682 Atomic Write Unit (Normal): 1 00:19:34.682 Atomic Write Unit (PFail): 1 00:19:34.682 Atomic Compare & Write Unit: 1 00:19:34.682 Fused Compare & Write: Supported 00:19:34.682 Scatter-Gather List 00:19:34.682 SGL Command Set: Supported (Dword aligned) 00:19:34.682 SGL Keyed: Not Supported 00:19:34.682 SGL Bit Bucket Descriptor: Not Supported 00:19:34.682 SGL Metadata Pointer: Not Supported 00:19:34.682 Oversized SGL: Not Supported 00:19:34.682 SGL Metadata Address: Not Supported 00:19:34.682 SGL Offset: Not Supported 00:19:34.682 Transport SGL Data Block: Not Supported 00:19:34.682 Replay Protected Memory Block: Not Supported 00:19:34.682 00:19:34.682 Firmware Slot Information 00:19:34.682 ========================= 00:19:34.682 Active slot: 1 00:19:34.682 Slot 1 Firmware Revision: 25.01 00:19:34.682 00:19:34.682 00:19:34.682 Commands Supported and Effects 00:19:34.682 ============================== 00:19:34.682 Admin Commands 00:19:34.682 -------------- 00:19:34.682 Get Log Page (02h): Supported 00:19:34.682 Identify (06h): Supported 00:19:34.682 Abort (08h): Supported 00:19:34.682 Set Features (09h): Supported 00:19:34.682 Get Features (0Ah): Supported 00:19:34.682 Asynchronous Event Request (0Ch): Supported 00:19:34.682 Keep Alive (18h): Supported 00:19:34.682 I/O Commands 00:19:34.682 ------------ 00:19:34.682 Flush (00h): Supported LBA-Change 00:19:34.682 Write (01h): Supported LBA-Change 00:19:34.682 Read (02h): Supported 00:19:34.682 Compare (05h): Supported 00:19:34.682 Write Zeroes (08h): Supported LBA-Change 00:19:34.682 Dataset Management (09h): Supported LBA-Change 00:19:34.682 Copy (19h): Supported LBA-Change 00:19:34.682 00:19:34.682 Error Log 00:19:34.682 ========= 
00:19:34.682 00:19:34.682 Arbitration 00:19:34.682 =========== 00:19:34.682 Arbitration Burst: 1 00:19:34.682 00:19:34.682 Power Management 00:19:34.682 ================ 00:19:34.682 Number of Power States: 1 00:19:34.682 Current Power State: Power State #0 00:19:34.682 Power State #0: 00:19:34.682 Max Power: 0.00 W 00:19:34.682 Non-Operational State: Operational 00:19:34.682 Entry Latency: Not Reported 00:19:34.682 Exit Latency: Not Reported 00:19:34.682 Relative Read Throughput: 0 00:19:34.682 Relative Read Latency: 0 00:19:34.682 Relative Write Throughput: 0 00:19:34.682 Relative Write Latency: 0 00:19:34.682 Idle Power: Not Reported 00:19:34.682 Active Power: Not Reported 00:19:34.682 Non-Operational Permissive Mode: Not Supported 00:19:34.682 00:19:34.682 Health Information 00:19:34.682 ================== 00:19:34.682 Critical Warnings: 00:19:34.682 Available Spare Space: OK 00:19:34.682 Temperature: OK 00:19:34.682 Device Reliability: OK 00:19:34.682 Read Only: No 00:19:34.682 Volatile Memory Backup: OK 00:19:34.682 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:34.682 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:34.682 Available Spare: 0% 00:19:34.682 Available Spare Threshold: 0% 
[2024-11-05 12:34:03.765993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:34.682 [2024-11-05 12:34:03.766011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:34.682 [2024-11-05 12:34:03.766056] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:19:34.682 [2024-11-05 12:34:03.766074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.682 [2024-11-05 12:34:03.766085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.682 [2024-11-05 12:34:03.766095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.682 [2024-11-05 12:34:03.766105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.682 [2024-11-05 12:34:03.766563] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:34.682 [2024-11-05 12:34:03.766588] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:34.682 [2024-11-05 12:34:03.767564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:34.682 [2024-11-05 12:34:03.767639] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:19:34.682 [2024-11-05 12:34:03.767653] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:19:34.682 [2024-11-05 12:34:03.768569] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:34.682 [2024-11-05 12:34:03.768590] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:19:34.682 [2024-11-05 12:34:03.768643] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:34.682 [2024-11-05 12:34:03.770607] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:19:34.682 Life Percentage Used: 0% 00:19:34.682 Data Units Read: 0 00:19:34.682 Data 
Units Written: 0 00:19:34.682 Host Read Commands: 0 00:19:34.682 Host Write Commands: 0 00:19:34.682 Controller Busy Time: 0 minutes 00:19:34.682 Power Cycles: 0 00:19:34.682 Power On Hours: 0 hours 00:19:34.682 Unsafe Shutdowns: 0 00:19:34.682 Unrecoverable Media Errors: 0 00:19:34.682 Lifetime Error Log Entries: 0 00:19:34.682 Warning Temperature Time: 0 minutes 00:19:34.682 Critical Temperature Time: 0 minutes 00:19:34.682 00:19:34.682 Number of Queues 00:19:34.682 ================ 00:19:34.682 Number of I/O Submission Queues: 127 00:19:34.682 Number of I/O Completion Queues: 127 00:19:34.682 00:19:34.682 Active Namespaces 00:19:34.682 ================= 00:19:34.682 Namespace ID:1 00:19:34.682 Error Recovery Timeout: Unlimited 00:19:34.682 Command Set Identifier: NVM (00h) 00:19:34.682 Deallocate: Supported 00:19:34.682 Deallocated/Unwritten Error: Not Supported 00:19:34.682 Deallocated Read Value: Unknown 00:19:34.682 Deallocate in Write Zeroes: Not Supported 00:19:34.682 Deallocated Guard Field: 0xFFFF 00:19:34.682 Flush: Supported 00:19:34.682 Reservation: Supported 00:19:34.682 Namespace Sharing Capabilities: Multiple Controllers 00:19:34.682 Size (in LBAs): 131072 (0GiB) 00:19:34.682 Capacity (in LBAs): 131072 (0GiB) 00:19:34.682 Utilization (in LBAs): 131072 (0GiB) 00:19:34.682 NGUID: A3342399AB7F45DFA338234649E5142D 00:19:34.682 UUID: a3342399-ab7f-45df-a338-234649e5142d 00:19:34.682 Thin Provisioning: Not Supported 00:19:34.682 Per-NS Atomic Units: Yes 00:19:34.682 Atomic Boundary Size (Normal): 0 00:19:34.682 Atomic Boundary Size (PFail): 0 00:19:34.682 Atomic Boundary Offset: 0 00:19:34.682 Maximum Single Source Range Length: 65535 00:19:34.682 Maximum Copy Length: 65535 00:19:34.682 Maximum Source Range Count: 1 00:19:34.682 NGUID/EUI64 Never Reused: No 00:19:34.682 Namespace Write Protected: No 00:19:34.682 Number of LBA Formats: 1 00:19:34.682 Current LBA Format: LBA Format #00 00:19:34.682 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:19:34.682 00:19:34.682 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:34.940 [2024-11-05 12:34:04.022782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:40.200 Initializing NVMe Controllers 00:19:40.200 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:40.200 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:40.200 Initialization complete. Launching workers. 00:19:40.200 ======================================================== 00:19:40.200 Latency(us) 00:19:40.200 Device Information : IOPS MiB/s Average min max 00:19:40.200 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32968.40 128.78 3884.15 1191.19 8283.74 00:19:40.200 ======================================================== 00:19:40.200 Total : 32968.40 128.78 3884.15 1191.19 8283.74 00:19:40.200 00:19:40.200 [2024-11-05 12:34:09.046331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:40.200 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:40.200 [2024-11-05 12:34:09.292492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:45.461 Initializing NVMe Controllers 00:19:45.461 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:19:45.461 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:45.461 Initialization complete. Launching workers. 00:19:45.461 ======================================================== 00:19:45.461 Latency(us) 00:19:45.461 Device Information : IOPS MiB/s Average min max 00:19:45.461 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15949.12 62.30 8030.84 5997.22 15843.18 00:19:45.461 ======================================================== 00:19:45.461 Total : 15949.12 62.30 8030.84 5997.22 15843.18 00:19:45.461 00:19:45.461 [2024-11-05 12:34:14.335363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:45.461 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:45.461 [2024-11-05 12:34:14.570491] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:50.722 [2024-11-05 12:34:19.643212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:50.722 Initializing NVMe Controllers 00:19:50.722 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:50.722 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:50.722 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:50.722 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:50.722 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:50.722 Initialization complete. Launching workers. 
00:19:50.722 Starting thread on core 2 00:19:50.722 Starting thread on core 3 00:19:50.722 Starting thread on core 1 00:19:50.722 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:50.722 [2024-11-05 12:34:19.961350] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:54.956 [2024-11-05 12:34:23.592128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:54.956 Initializing NVMe Controllers 00:19:54.956 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.956 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.956 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:54.956 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:54.956 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:54.956 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:54.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:54.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:54.956 Initialization complete. Launching workers. 
00:19:54.956 Starting thread on core 1 with urgent priority queue 00:19:54.956 Starting thread on core 2 with urgent priority queue 00:19:54.956 Starting thread on core 3 with urgent priority queue 00:19:54.956 Starting thread on core 0 with urgent priority queue 00:19:54.956 SPDK bdev Controller (SPDK1 ) core 0: 4259.33 IO/s 23.48 secs/100000 ios 00:19:54.956 SPDK bdev Controller (SPDK1 ) core 1: 4345.00 IO/s 23.01 secs/100000 ios 00:19:54.956 SPDK bdev Controller (SPDK1 ) core 2: 4498.00 IO/s 22.23 secs/100000 ios 00:19:54.956 SPDK bdev Controller (SPDK1 ) core 3: 4068.33 IO/s 24.58 secs/100000 ios 00:19:54.956 ======================================================== 00:19:54.956 00:19:54.956 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:54.956 [2024-11-05 12:34:23.907322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:54.956 Initializing NVMe Controllers 00:19:54.956 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.956 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:54.956 Namespace ID: 1 size: 0GB 00:19:54.956 Initialization complete. 00:19:54.956 INFO: using host memory buffer for IO 00:19:54.956 Hello world! 
00:19:54.956 [2024-11-05 12:34:23.940872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:54.956 12:34:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:55.213 [2024-11-05 12:34:24.240376] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:56.145 Initializing NVMe Controllers 00:19:56.145 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.145 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.145 Initialization complete. Launching workers. 00:19:56.145 submit (in ns) avg, min, max = 9800.1, 3503.3, 4021262.2 00:19:56.145 complete (in ns) avg, min, max = 24444.1, 2072.2, 4998387.8 00:19:56.145 00:19:56.145 Submit histogram 00:19:56.145 ================ 00:19:56.145 Range in us Cumulative Count 00:19:56.145 3.484 - 3.508: 0.0541% ( 7) 00:19:56.145 3.508 - 3.532: 0.5950% ( 70) 00:19:56.145 3.532 - 3.556: 2.3953% ( 233) 00:19:56.145 3.556 - 3.579: 6.3514% ( 512) 00:19:56.145 3.579 - 3.603: 14.2096% ( 1017) 00:19:56.145 3.603 - 3.627: 24.8416% ( 1376) 00:19:56.145 3.627 - 3.650: 34.7164% ( 1278) 00:19:56.145 3.650 - 3.674: 42.4587% ( 1002) 00:19:56.145 3.674 - 3.698: 48.6710% ( 804) 00:19:56.145 3.698 - 3.721: 54.4970% ( 754) 00:19:56.145 3.721 - 3.745: 58.9244% ( 573) 00:19:56.145 3.745 - 3.769: 62.9733% ( 524) 00:19:56.145 3.769 - 3.793: 66.1335% ( 409) 00:19:56.145 3.793 - 3.816: 69.2551% ( 404) 00:19:56.145 3.816 - 3.840: 72.6008% ( 433) 00:19:56.145 3.840 - 3.864: 77.2833% ( 606) 00:19:56.145 3.864 - 3.887: 81.1930% ( 506) 00:19:56.145 3.887 - 3.911: 84.4074% ( 416) 00:19:56.145 3.911 - 3.935: 86.8954% ( 322) 00:19:56.145 3.935 - 3.959: 88.5566% ( 215) 00:19:56.145 3.959 - 3.982: 90.1870% ( 
211) 00:19:56.145 3.982 - 4.006: 91.7092% ( 197) 00:19:56.145 4.006 - 4.030: 92.8450% ( 147) 00:19:56.145 4.030 - 4.053: 93.7027% ( 111) 00:19:56.145 4.053 - 4.077: 94.5449% ( 109) 00:19:56.145 4.077 - 4.101: 95.1862% ( 83) 00:19:56.145 4.101 - 4.124: 95.5880% ( 52) 00:19:56.145 4.124 - 4.148: 96.0593% ( 61) 00:19:56.145 4.148 - 4.172: 96.3298% ( 35) 00:19:56.145 4.172 - 4.196: 96.5770% ( 32) 00:19:56.145 4.196 - 4.219: 96.7393% ( 21) 00:19:56.145 4.219 - 4.243: 96.8475% ( 14) 00:19:56.145 4.243 - 4.267: 96.9479% ( 13) 00:19:56.145 4.267 - 4.290: 97.0561% ( 14) 00:19:56.145 4.290 - 4.314: 97.1102% ( 7) 00:19:56.145 4.314 - 4.338: 97.1952% ( 11) 00:19:56.145 4.338 - 4.361: 97.2493% ( 7) 00:19:56.145 4.361 - 4.385: 97.3111% ( 8) 00:19:56.145 4.385 - 4.409: 97.3420% ( 4) 00:19:56.145 4.409 - 4.433: 97.3806% ( 5) 00:19:56.145 4.433 - 4.456: 97.3961% ( 2) 00:19:56.145 4.456 - 4.480: 97.4115% ( 2) 00:19:56.145 4.504 - 4.527: 97.4193% ( 1) 00:19:56.145 4.527 - 4.551: 97.4424% ( 3) 00:19:56.145 4.551 - 4.575: 97.4502% ( 1) 00:19:56.145 4.575 - 4.599: 97.5042% ( 7) 00:19:56.146 4.599 - 4.622: 97.5352% ( 4) 00:19:56.146 4.622 - 4.646: 97.5506% ( 2) 00:19:56.146 4.646 - 4.670: 97.5892% ( 5) 00:19:56.146 4.670 - 4.693: 97.6433% ( 7) 00:19:56.146 4.693 - 4.717: 97.6742% ( 4) 00:19:56.146 4.717 - 4.741: 97.6897% ( 2) 00:19:56.146 4.741 - 4.764: 97.7283% ( 5) 00:19:56.146 4.764 - 4.788: 97.7438% ( 2) 00:19:56.146 4.788 - 4.812: 97.7670% ( 3) 00:19:56.146 4.812 - 4.836: 97.7901% ( 3) 00:19:56.146 4.836 - 4.859: 97.8365% ( 6) 00:19:56.146 4.859 - 4.883: 97.9215% ( 11) 00:19:56.146 4.883 - 4.907: 97.9601% ( 5) 00:19:56.146 4.907 - 4.930: 98.0065% ( 6) 00:19:56.146 4.930 - 4.954: 98.0529% ( 6) 00:19:56.146 4.954 - 4.978: 98.0683% ( 2) 00:19:56.146 4.978 - 5.001: 98.1069% ( 5) 00:19:56.146 5.001 - 5.025: 98.1378% ( 4) 00:19:56.146 5.025 - 5.049: 98.1456% ( 1) 00:19:56.146 5.049 - 5.073: 98.1688% ( 3) 00:19:56.146 5.073 - 5.096: 98.1765% ( 1) 00:19:56.146 5.096 - 5.120: 98.1842% ( 1) 
00:19:56.146 5.120 - 5.144: 98.1919% ( 1) 00:19:56.146 5.167 - 5.191: 98.2151% ( 3) 00:19:56.146 5.191 - 5.215: 98.2306% ( 2) 00:19:56.146 5.239 - 5.262: 98.2383% ( 1) 00:19:56.146 5.262 - 5.286: 98.2460% ( 1) 00:19:56.146 5.286 - 5.310: 98.2615% ( 2) 00:19:56.146 5.310 - 5.333: 98.2692% ( 1) 00:19:56.146 5.452 - 5.476: 98.2769% ( 1) 00:19:56.146 5.926 - 5.950: 98.2847% ( 1) 00:19:56.146 6.116 - 6.163: 98.2924% ( 1) 00:19:56.146 6.163 - 6.210: 98.3001% ( 1) 00:19:56.146 6.258 - 6.305: 98.3078% ( 1) 00:19:56.146 6.400 - 6.447: 98.3156% ( 1) 00:19:56.146 6.637 - 6.684: 98.3233% ( 1) 00:19:56.146 6.684 - 6.732: 98.3310% ( 1) 00:19:56.146 6.732 - 6.779: 98.3387% ( 1) 00:19:56.146 6.779 - 6.827: 98.3465% ( 1) 00:19:56.146 6.874 - 6.921: 98.3542% ( 1) 00:19:56.146 6.921 - 6.969: 98.3619% ( 1) 00:19:56.146 6.969 - 7.016: 98.3696% ( 1) 00:19:56.146 7.111 - 7.159: 98.3774% ( 1) 00:19:56.146 7.206 - 7.253: 98.3928% ( 2) 00:19:56.146 7.301 - 7.348: 98.4006% ( 1) 00:19:56.146 7.396 - 7.443: 98.4160% ( 2) 00:19:56.146 7.443 - 7.490: 98.4392% ( 3) 00:19:56.146 7.490 - 7.538: 98.4546% ( 2) 00:19:56.146 7.633 - 7.680: 98.4701% ( 2) 00:19:56.146 7.680 - 7.727: 98.4856% ( 2) 00:19:56.146 7.727 - 7.775: 98.5010% ( 2) 00:19:56.146 7.917 - 7.964: 98.5087% ( 1) 00:19:56.146 7.964 - 8.012: 98.5242% ( 2) 00:19:56.146 8.012 - 8.059: 98.5396% ( 2) 00:19:56.146 8.059 - 8.107: 98.5551% ( 2) 00:19:56.146 8.107 - 8.154: 98.5705% ( 2) 00:19:56.146 8.154 - 8.201: 98.5783% ( 1) 00:19:56.146 8.201 - 8.249: 98.5860% ( 1) 00:19:56.146 8.249 - 8.296: 98.5937% ( 1) 00:19:56.146 8.296 - 8.344: 98.6015% ( 1) 00:19:56.146 8.391 - 8.439: 98.6092% ( 1) 00:19:56.146 8.439 - 8.486: 98.6169% ( 1) 00:19:56.146 8.533 - 8.581: 98.6246% ( 1) 00:19:56.146 8.865 - 8.913: 98.6324% ( 1) 00:19:56.146 9.102 - 9.150: 98.6401% ( 1) 00:19:56.146 9.150 - 9.197: 98.6478% ( 1) 00:19:56.146 9.292 - 9.339: 98.6555% ( 1) 00:19:56.146 9.339 - 9.387: 98.6633% ( 1) 00:19:56.146 9.387 - 9.434: 98.6710% ( 1) 00:19:56.146 9.576 - 
9.624: 98.6787% ( 1) 00:19:56.146 9.766 - 9.813: 98.6864% ( 1) 00:19:56.146 9.908 - 9.956: 98.7019% ( 2) 00:19:56.146 10.098 - 10.145: 98.7096% ( 1) 00:19:56.146 10.145 - 10.193: 98.7174% ( 1) 00:19:56.146 10.382 - 10.430: 98.7251% ( 1) 00:19:56.146 11.425 - 11.473: 98.7405% ( 2) 00:19:56.146 11.520 - 11.567: 98.7483% ( 1) 00:19:56.146 11.662 - 11.710: 98.7560% ( 1) 00:19:56.146 11.899 - 11.947: 98.7714% ( 2) 00:19:56.146 12.041 - 12.089: 98.7792% ( 1) 00:19:56.146 12.136 - 12.231: 98.7946% ( 2) 00:19:56.146 12.705 - 12.800: 98.8023% ( 1) 00:19:56.146 12.895 - 12.990: 98.8101% ( 1) 00:19:56.146 12.990 - 13.084: 98.8178% ( 1) 00:19:56.146 13.464 - 13.559: 98.8333% ( 2) 00:19:56.146 14.033 - 14.127: 98.8487% ( 2) 00:19:56.146 14.222 - 14.317: 98.8564% ( 1) 00:19:56.146 14.507 - 14.601: 98.8642% ( 1) 00:19:56.146 14.696 - 14.791: 98.8719% ( 1) 00:19:56.146 15.455 - 15.550: 98.8796% ( 1) 00:19:56.146 17.161 - 17.256: 98.8951% ( 2) 00:19:56.146 17.256 - 17.351: 98.9337% ( 5) 00:19:56.146 17.351 - 17.446: 98.9723% ( 5) 00:19:56.146 17.446 - 17.541: 98.9955% ( 3) 00:19:56.146 17.541 - 17.636: 99.0264% ( 4) 00:19:56.146 17.636 - 17.730: 99.0419% ( 2) 00:19:56.146 17.730 - 17.825: 99.0960% ( 7) 00:19:56.146 17.825 - 17.920: 99.1732% ( 10) 00:19:56.146 17.920 - 18.015: 99.2428% ( 9) 00:19:56.146 18.015 - 18.110: 99.2814% ( 5) 00:19:56.146 18.110 - 18.204: 99.3432% ( 8) 00:19:56.146 18.204 - 18.299: 99.4282% ( 11) 00:19:56.146 18.299 - 18.394: 99.4900% ( 8) 00:19:56.146 18.394 - 18.489: 99.5441% ( 7) 00:19:56.146 18.489 - 18.584: 99.5905% ( 6) 00:19:56.146 18.584 - 18.679: 99.6214% ( 4) 00:19:56.146 18.679 - 18.773: 99.6368% ( 2) 00:19:56.146 18.773 - 18.868: 99.6755% ( 5) 00:19:56.146 18.868 - 18.963: 99.6909% ( 2) 00:19:56.146 18.963 - 19.058: 99.7373% ( 6) 00:19:56.146 19.247 - 19.342: 99.7527% ( 2) 00:19:56.146 19.342 - 19.437: 99.7605% ( 1) 00:19:56.146 19.532 - 19.627: 99.7759% ( 2) 00:19:56.146 19.911 - 20.006: 99.7837% ( 1) 00:19:56.146 20.196 - 20.290: 99.7914% ( 1) 
00:19:56.146 20.670 - 20.764: 99.7991% ( 1) 00:19:56.146 20.764 - 20.859: 99.8068% ( 1) 00:19:56.146 20.859 - 20.954: 99.8146% ( 1) 00:19:56.146 21.428 - 21.523: 99.8223% ( 1) 00:19:56.146 23.040 - 23.135: 99.8300% ( 1) 00:19:56.146 26.548 - 26.738: 99.8377% ( 1) 00:19:56.146 27.117 - 27.307: 99.8455% ( 1) 00:19:56.146 30.151 - 30.341: 99.8532% ( 1) 00:19:56.146 3980.705 - 4004.978: 99.9691% ( 15) 00:19:56.146 4004.978 - 4029.250: 100.0000% ( 4) 00:19:56.146 00:19:56.146 Complete histogram 00:19:56.146 ================== 00:19:56.146 Range in us Cumulative Count 00:19:56.146 2.062 - 2.074: 0.0077% ( 1) 00:19:56.146 2.074 - 2.086: 14.1400% ( 1829) 00:19:56.146 2.086 - 2.098: 31.3476% ( 2227) 00:19:56.146 2.098 - 2.110: 33.4492% ( 272) 00:19:56.146 2.110 - 2.121: 51.3831% ( 2321) 00:19:56.146 2.121 - 2.133: 58.7854% ( 958) 00:19:56.146 2.133 - 2.145: 60.9411% ( 279) 00:19:56.146 2.145 - 2.157: 68.9074% ( 1031) 00:19:56.146 2.157 - 2.169: 72.6317% ( 482) 00:19:56.146 2.169 - 2.181: 74.2157% ( 205) 00:19:56.146 2.181 - 2.193: 79.4854% ( 682) 00:19:56.146 2.193 - 2.204: 81.9811% ( 323) 00:19:56.146 2.204 - 2.216: 82.8079% ( 107) 00:19:56.146 2.216 - 2.228: 85.5046% ( 349) 00:19:56.146 2.228 - 2.240: 88.2939% ( 361) 00:19:56.146 2.240 - 2.252: 90.0865% ( 232) 00:19:56.146 2.252 - 2.264: 92.4278% ( 303) 00:19:56.146 2.264 - 2.276: 93.4400% ( 131) 00:19:56.146 2.276 - 2.287: 93.7799% ( 44) 00:19:56.146 2.287 - 2.299: 94.1895% ( 53) 00:19:56.146 2.299 - 2.311: 94.6222% ( 56) 00:19:56.146 2.311 - 2.323: 95.2480% ( 81) 00:19:56.146 2.323 - 2.335: 95.4103% ( 21) 00:19:56.146 2.335 - 2.347: 95.4644% ( 7) 00:19:56.146 2.347 - 2.359: 95.5107% ( 6) 00:19:56.146 2.359 - 2.370: 95.6344% ( 16) 00:19:56.146 2.370 - 2.382: 95.8507% ( 28) 00:19:56.146 2.382 - 2.394: 96.3530% ( 65) 00:19:56.146 2.394 - 2.406: 96.8552% ( 65) 00:19:56.146 2.406 - 2.418: 97.1875% ( 43) 00:19:56.146 2.418 - 2.430: 97.3497% ( 21) 00:19:56.146 2.430 - 2.441: 97.5815% ( 30) 00:19:56.146 2.441 - 2.453: 97.7283% 
( 19) 00:19:56.146 2.453 - 2.465: 97.8906% ( 21) 00:19:56.146 2.465 - 2.477: 98.0142% ( 16) 00:19:56.146 2.477 - 2.489: 98.0992% ( 11) 00:19:56.146 2.489 - 2.501: 98.1533% ( 7) 00:19:56.146 2.501 - 2.513: 98.2615% ( 14) 00:19:56.146 2.513 - 2.524: 98.3233% ( 8) 00:19:56.146 2.524 - 2.536: 98.3928% ( 9) 00:19:56.146 2.536 - 2.548: 98.4237% ( 4) 00:19:56.146 2.548 - 2.560: 98.4392% ( 2) 00:19:56.146 2.560 - 2.572: 98.4469% ( 1) 00:19:56.146 2.572 - 2.584: 98.4546% ( 1) 00:19:56.146 2.584 - 2.596: 98.4701% ( 2) 00:19:56.146 2.596 - 2.607: 98.4778% ( 1) 00:19:56.147 2.607 - 2.619: 98.4933% ( 2) 00:19:56.147 2.631 - 2.643: 98.5010% ( 1) 00:19:56.147 2.690 - 2.702: 98.5087% ( 1) 00:19:56.147 2.726 - 2.738: 98.5165% ( 1) 00:19:56.147 2.750 - 2.761: 98.5242% ( 1) 00:19:56.147 2.844 - 2.856: 98.5319% ( 1) 00:19:56.147 3.413 - 3.437: 98.5474% ( 2) 00:19:56.147 3.508 - 3.532: 98.5551% ( 1) 00:19:56.147 3.532 - 3.556: 98.5628% ( 1) 00:19:56.147 3.579 - 3.603: 98.5783% ( 2) 00:19:56.147 3.603 - 3.627: 98.5860% ( 1) 00:19:56.147 3.627 - 3.650: 98.5937% ( 1) 00:19:56.147 3.650 - 3.674: 98.6015% ( 1) 00:19:56.147 3.698 - 3.721: 98.6092% ( 1) 00:19:56.147 3.721 - 3.745: 98.6246% ( 2) 00:19:56.147 3.745 - 3.769: 98.6324% ( 1) 00:19:56.147 3.793 - 3.816: 98.6401% ( 1) 00:19:56.147 3.816 - 3.840: 98.6555% ( 2) 00:19:56.147 3.864 - 3.887: 98.6633% ( 1) 00:19:56.147 3.911 - 3.935: 98.6710% ( 1) 00:19:56.147 3.935 - 3.959: 98.6864% ( 2) 00:19:56.147 3.959 - 3.982: 98.7019% ( 2) 00:19:56.147 3.982 - 4.006: 98.7096% ( 1) 00:19:56.147 4.006 - 4.030: 98.7174% ( 1) 00:19:56.147 4.030 - 4.053: 98.7328% ( 2) 00:19:56.147 4.101 - 4.124: 98.7405% ( 1) 00:19:56.147 4.172 - 4.196: 98.7560% ( 2) 00:19:56.147 4.196 - 4.219: 98.7637% ( 1) 00:19:56.147 5.570 - 5.594: 98.7714% ( 1) 00:19:56.147 5.689 - 5.713: 98.7792% ( 1) 00:19:56.147 6.116 - 6.163: 98.7869% ( 1) 00:19:56.147 6.305 - 6.353: 98.8023% ( 2) 00:19:56.147 6.542 - 6.590: 98.8178% ( 2) 00:19:56.147 6.637 - 6.684: 98.8333% ( 2) 00:19:56.147 
[2024-11-05 12:34:25.263524] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:56.147 6.779 - 6.827: 98.8410% ( 1) 00:19:56.147 7.111 - 7.159: 98.8487% ( 1) 00:19:56.147 7.348 - 7.396: 98.8564% ( 1) 00:19:56.147 7.490 - 7.538: 98.8642% ( 1) 00:19:56.147 7.538 - 7.585: 98.8719% ( 1) 00:19:56.147 7.775 - 7.822: 98.8796% ( 1) 00:19:56.147 8.107 - 8.154: 98.8873% ( 1) 00:19:56.147 9.007 - 9.055: 98.8951% ( 1) 00:19:56.147 15.170 - 15.265: 98.9028% ( 1) 00:19:56.147 15.550 - 15.644: 98.9105% ( 1) 00:19:56.147 15.644 - 15.739: 98.9183% ( 1) 00:19:56.147 15.739 - 15.834: 98.9260% ( 1) 00:19:56.147 15.929 - 16.024: 98.9337% ( 1) 00:19:56.147 16.024 - 16.119: 98.9878% ( 7) 00:19:56.147 16.119 - 16.213: 99.0187% ( 4) 00:19:56.147 16.213 - 16.308: 99.0419% ( 3) 00:19:56.147 16.308 - 16.403: 99.0728% ( 4) 00:19:56.147 16.403 - 16.498: 99.0882% ( 2) 00:19:56.147 16.498 - 16.593: 99.1346% ( 6) 00:19:56.147 16.593 - 16.687: 99.1732% ( 5) 00:19:56.147 16.687 - 16.782: 99.2119% ( 5) 00:19:56.147 16.782 - 16.877: 99.2582% ( 6) 00:19:56.147 16.877 - 16.972: 99.3046% ( 6) 00:19:56.147 16.972 - 17.067: 99.3123% ( 1) 00:19:56.147 17.067 - 17.161: 99.3355% ( 3) 00:19:56.147 17.161 - 17.256: 99.3432% ( 1) 00:19:56.147 17.351 - 17.446: 99.3587% ( 2) 00:19:56.147 17.446 - 17.541: 99.3664% ( 1) 00:19:56.147 17.636 - 17.730: 99.3819% ( 2) 00:19:56.147 17.730 - 17.825: 99.3973% ( 2) 00:19:56.147 18.204 - 18.299: 99.4128% ( 2) 00:19:56.147 18.299 - 18.394: 99.4205% ( 1) 00:19:56.147 18.868 - 18.963: 99.4282% ( 1) 00:19:56.147 19.058 - 19.153: 99.4359% ( 1) 00:19:56.147 67.887 - 68.267: 99.4437% ( 1) 00:19:56.147 2184.533 - 2196.670: 99.4514% ( 1) 00:19:56.147 3980.705 - 4004.978: 99.8068% ( 46) 00:19:56.147 4004.978 - 4029.250: 99.9923% ( 24) 00:19:56.147 4975.881 - 5000.154: 100.0000% ( 1) 00:19:56.147 00:19:56.147 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:56.147 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:56.147 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:56.147 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:56.147 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:56.404 [ 00:19:56.404 { 00:19:56.404 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:56.404 "subtype": "Discovery", 00:19:56.404 "listen_addresses": [], 00:19:56.404 "allow_any_host": true, 00:19:56.404 "hosts": [] 00:19:56.404 }, 00:19:56.404 { 00:19:56.404 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:56.404 "subtype": "NVMe", 00:19:56.404 "listen_addresses": [ 00:19:56.404 { 00:19:56.404 "trtype": "VFIOUSER", 00:19:56.404 "adrfam": "IPv4", 00:19:56.404 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:56.404 "trsvcid": "0" 00:19:56.404 } 00:19:56.404 ], 00:19:56.404 "allow_any_host": true, 00:19:56.404 "hosts": [], 00:19:56.404 "serial_number": "SPDK1", 00:19:56.404 "model_number": "SPDK bdev Controller", 00:19:56.404 "max_namespaces": 32, 00:19:56.404 "min_cntlid": 1, 00:19:56.404 "max_cntlid": 65519, 00:19:56.404 "namespaces": [ 00:19:56.404 { 00:19:56.405 "nsid": 1, 00:19:56.405 "bdev_name": "Malloc1", 00:19:56.405 "name": "Malloc1", 00:19:56.405 "nguid": "A3342399AB7F45DFA338234649E5142D", 00:19:56.405 "uuid": "a3342399-ab7f-45df-a338-234649e5142d" 00:19:56.405 } 00:19:56.405 ] 00:19:56.405 }, 00:19:56.405 { 00:19:56.405 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:56.405 "subtype": "NVMe", 00:19:56.405 "listen_addresses": [ 00:19:56.405 { 00:19:56.405 "trtype": "VFIOUSER", 00:19:56.405 
"adrfam": "IPv4", 00:19:56.405 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:56.405 "trsvcid": "0" 00:19:56.405 } 00:19:56.405 ], 00:19:56.405 "allow_any_host": true, 00:19:56.405 "hosts": [], 00:19:56.405 "serial_number": "SPDK2", 00:19:56.405 "model_number": "SPDK bdev Controller", 00:19:56.405 "max_namespaces": 32, 00:19:56.405 "min_cntlid": 1, 00:19:56.405 "max_cntlid": 65519, 00:19:56.405 "namespaces": [ 00:19:56.405 { 00:19:56.405 "nsid": 1, 00:19:56.405 "bdev_name": "Malloc2", 00:19:56.405 "name": "Malloc2", 00:19:56.405 "nguid": "8312BD9FC273464595CAB7A035505BD7", 00:19:56.405 "uuid": "8312bd9f-c273-4645-95ca-b7a035505bd7" 00:19:56.405 } 00:19:56.405 ] 00:19:56.405 } 00:19:56.405 ] 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=644657 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:56.405 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:56.662 [2024-11-05 12:34:25.810343] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:56.919 Malloc3 00:19:56.919 12:34:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:57.177 [2024-11-05 12:34:26.211121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:57.177 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:57.177 Asynchronous Event Request test 00:19:57.177 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:57.177 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:57.177 Registering asynchronous event callbacks... 00:19:57.177 Starting namespace attribute notice tests for all controllers... 00:19:57.177 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:57.177 aer_cb - Changed Namespace 00:19:57.177 Cleaning up... 
00:19:57.435 [ 00:19:57.435 { 00:19:57.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:57.435 "subtype": "Discovery", 00:19:57.435 "listen_addresses": [], 00:19:57.435 "allow_any_host": true, 00:19:57.435 "hosts": [] 00:19:57.435 }, 00:19:57.435 { 00:19:57.435 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:57.435 "subtype": "NVMe", 00:19:57.435 "listen_addresses": [ 00:19:57.435 { 00:19:57.435 "trtype": "VFIOUSER", 00:19:57.435 "adrfam": "IPv4", 00:19:57.435 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:57.435 "trsvcid": "0" 00:19:57.435 } 00:19:57.435 ], 00:19:57.435 "allow_any_host": true, 00:19:57.435 "hosts": [], 00:19:57.435 "serial_number": "SPDK1", 00:19:57.435 "model_number": "SPDK bdev Controller", 00:19:57.435 "max_namespaces": 32, 00:19:57.435 "min_cntlid": 1, 00:19:57.435 "max_cntlid": 65519, 00:19:57.435 "namespaces": [ 00:19:57.435 { 00:19:57.435 "nsid": 1, 00:19:57.435 "bdev_name": "Malloc1", 00:19:57.435 "name": "Malloc1", 00:19:57.435 "nguid": "A3342399AB7F45DFA338234649E5142D", 00:19:57.435 "uuid": "a3342399-ab7f-45df-a338-234649e5142d" 00:19:57.435 }, 00:19:57.435 { 00:19:57.435 "nsid": 2, 00:19:57.435 "bdev_name": "Malloc3", 00:19:57.435 "name": "Malloc3", 00:19:57.435 "nguid": "D5D90F3E879D49639524CD6DC3F06E0D", 00:19:57.435 "uuid": "d5d90f3e-879d-4963-9524-cd6dc3f06e0d" 00:19:57.435 } 00:19:57.435 ] 00:19:57.435 }, 00:19:57.435 { 00:19:57.435 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:57.435 "subtype": "NVMe", 00:19:57.435 "listen_addresses": [ 00:19:57.435 { 00:19:57.435 "trtype": "VFIOUSER", 00:19:57.435 "adrfam": "IPv4", 00:19:57.435 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:57.435 "trsvcid": "0" 00:19:57.435 } 00:19:57.435 ], 00:19:57.435 "allow_any_host": true, 00:19:57.435 "hosts": [], 00:19:57.435 "serial_number": "SPDK2", 00:19:57.435 "model_number": "SPDK bdev Controller", 00:19:57.435 "max_namespaces": 32, 00:19:57.435 "min_cntlid": 1, 00:19:57.435 "max_cntlid": 65519, 00:19:57.435 "namespaces": [ 
00:19:57.435 { 00:19:57.435 "nsid": 1, 00:19:57.435 "bdev_name": "Malloc2", 00:19:57.435 "name": "Malloc2", 00:19:57.436 "nguid": "8312BD9FC273464595CAB7A035505BD7", 00:19:57.436 "uuid": "8312bd9f-c273-4645-95ca-b7a035505bd7" 00:19:57.436 } 00:19:57.436 ] 00:19:57.436 } 00:19:57.436 ] 00:19:57.436 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 644657 00:19:57.436 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:57.436 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:57.436 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:57.436 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:57.436 [2024-11-05 12:34:26.520931] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:19:57.436 [2024-11-05 12:34:26.520975] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644794 ] 00:19:57.436 [2024-11-05 12:34:26.578674] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:57.436 [2024-11-05 12:34:26.583019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:57.436 [2024-11-05 12:34:26.583051] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe6083f4000 00:19:57.436 [2024-11-05 12:34:26.584018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.436 [2024-11-05 12:34:26.585025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.436 [2024-11-05 12:34:26.586033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.436 [2024-11-05 12:34:26.587040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:57.436 [2024-11-05 12:34:26.588049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:57.436 [2024-11-05 12:34:26.589051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.436 [2024-11-05 12:34:26.590062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:57.436 
[2024-11-05 12:34:26.591075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:57.436 [2024-11-05 12:34:26.592079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:57.436 [2024-11-05 12:34:26.592102] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe6070ec000 00:19:57.436 [2024-11-05 12:34:26.593273] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:57.436 [2024-11-05 12:34:26.605542] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:57.436 [2024-11-05 12:34:26.605579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:57.436 [2024-11-05 12:34:26.613696] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:57.436 [2024-11-05 12:34:26.613751] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:57.436 [2024-11-05 12:34:26.613855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:57.436 [2024-11-05 12:34:26.613889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:57.436 [2024-11-05 12:34:26.613901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:57.436 [2024-11-05 12:34:26.614706] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:57.436 [2024-11-05 12:34:26.614726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:57.436 [2024-11-05 12:34:26.614740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:57.436 [2024-11-05 12:34:26.615710] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:57.436 [2024-11-05 12:34:26.615730] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:57.436 [2024-11-05 12:34:26.615743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:57.436 [2024-11-05 12:34:26.616723] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:57.436 [2024-11-05 12:34:26.616742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:57.436 [2024-11-05 12:34:26.617724] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:57.436 [2024-11-05 12:34:26.617744] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:57.436 [2024-11-05 12:34:26.617753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:57.436 [2024-11-05 12:34:26.617764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:57.436 [2024-11-05 12:34:26.617874] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:57.436 [2024-11-05 12:34:26.617885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:57.436 [2024-11-05 12:34:26.617894] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:57.436 [2024-11-05 12:34:26.618733] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:57.436 [2024-11-05 12:34:26.619733] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:57.436 [2024-11-05 12:34:26.620746] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:57.436 [2024-11-05 12:34:26.621740] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:57.436 [2024-11-05 12:34:26.621812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:57.436 [2024-11-05 12:34:26.622753] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:57.436 [2024-11-05 12:34:26.622772] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:57.436 [2024-11-05 12:34:26.622781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:57.436 [2024-11-05 12:34:26.622805] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:57.436 [2024-11-05 12:34:26.622819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:57.436 [2024-11-05 12:34:26.622857] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:57.436 [2024-11-05 12:34:26.622874] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.436 [2024-11-05 12:34:26.622881] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.436 [2024-11-05 12:34:26.622899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.436 [2024-11-05 12:34:26.627875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:57.436 [2024-11-05 12:34:26.627899] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:57.436 [2024-11-05 12:34:26.627909] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:57.436 [2024-11-05 12:34:26.627917] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:57.436 [2024-11-05 12:34:26.627930] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:57.436 [2024-11-05 12:34:26.627939] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:57.436 [2024-11-05 12:34:26.627951] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:57.436 [2024-11-05 12:34:26.627960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:57.436 [2024-11-05 12:34:26.627973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:57.436 [2024-11-05 12:34:26.627990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:57.436 [2024-11-05 12:34:26.635873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:57.436 [2024-11-05 12:34:26.635903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.436 [2024-11-05 12:34:26.635919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.436 [2024-11-05 12:34:26.635931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.436 [2024-11-05 12:34:26.635944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.436 [2024-11-05 12:34:26.635953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:57.436 [2024-11-05 12:34:26.635964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:57.436 [2024-11-05 12:34:26.635978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:57.436 [2024-11-05 12:34:26.643873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:57.437 [2024-11-05 12:34:26.643896] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:57.437 [2024-11-05 12:34:26.643907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.643919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.643930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.643944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:57.437 [2024-11-05 12:34:26.651871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:57.437 [2024-11-05 12:34:26.651947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.651964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:57.437 
[2024-11-05 12:34:26.651979] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:57.437 [2024-11-05 12:34:26.651988] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:57.437 [2024-11-05 12:34:26.651998] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.437 [2024-11-05 12:34:26.652008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:57.437 [2024-11-05 12:34:26.659872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:57.437 [2024-11-05 12:34:26.659896] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:57.437 [2024-11-05 12:34:26.659916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.659931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.659944] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:57.437 [2024-11-05 12:34:26.659952] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.437 [2024-11-05 12:34:26.659958] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.437 [2024-11-05 12:34:26.659968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.437 [2024-11-05 12:34:26.667872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:57.437 [2024-11-05 12:34:26.667902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.667919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:57.437 [2024-11-05 12:34:26.667933] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:57.437 [2024-11-05 12:34:26.667941] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.437 [2024-11-05 12:34:26.667948] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.437 [2024-11-05 12:34:26.667957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.695 [2024-11-05 12:34:26.675872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:57.695 [2024-11-05 12:34:26.675894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:57.695 [2024-11-05 12:34:26.675908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:57.695 [2024-11-05 12:34:26.675923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:57.695 [2024-11-05 12:34:26.675935] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:19:57.695 [2024-11-05 12:34:26.675944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:57.695 [2024-11-05 12:34:26.675953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:57.695 [2024-11-05 12:34:26.675962] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:57.695 [2024-11-05 12:34:26.675973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:57.695 [2024-11-05 12:34:26.675983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:57.695 [2024-11-05 12:34:26.676010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:57.695 [2024-11-05 12:34:26.683871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:57.695 [2024-11-05 12:34:26.683899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:57.695 [2024-11-05 12:34:26.691881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:57.695 [2024-11-05 12:34:26.691908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:57.695 [2024-11-05 12:34:26.699870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:57.695 [2024-11-05 
12:34:26.699896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:57.695 [2024-11-05 12:34:26.707872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:57.695 [2024-11-05 12:34:26.707904] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:57.695 [2024-11-05 12:34:26.707916] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:57.695 [2024-11-05 12:34:26.707922] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:57.695 [2024-11-05 12:34:26.707928] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:57.695 [2024-11-05 12:34:26.707934] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:57.695 [2024-11-05 12:34:26.707944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:57.695 [2024-11-05 12:34:26.707956] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:57.696 [2024-11-05 12:34:26.707964] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:57.696 [2024-11-05 12:34:26.707970] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.696 [2024-11-05 12:34:26.707979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:57.696 [2024-11-05 12:34:26.707991] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:57.696 [2024-11-05 12:34:26.707999] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:57.696 [2024-11-05 12:34:26.708005] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.696 [2024-11-05 12:34:26.708014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:57.696 [2024-11-05 12:34:26.708030] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:57.696 [2024-11-05 12:34:26.708040] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:57.696 [2024-11-05 12:34:26.708046] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:57.696 [2024-11-05 12:34:26.708055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:57.696 [2024-11-05 12:34:26.715869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:57.696 [2024-11-05 12:34:26.715902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:57.696 [2024-11-05 12:34:26.715922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:57.696 [2024-11-05 12:34:26.715936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:57.696 ===================================================== 00:19:57.696 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:57.696 ===================================================== 00:19:57.696 Controller Capabilities/Features 00:19:57.696 
================================ 00:19:57.696 Vendor ID: 4e58 00:19:57.696 Subsystem Vendor ID: 4e58 00:19:57.696 Serial Number: SPDK2 00:19:57.696 Model Number: SPDK bdev Controller 00:19:57.696 Firmware Version: 25.01 00:19:57.696 Recommended Arb Burst: 6 00:19:57.696 IEEE OUI Identifier: 8d 6b 50 00:19:57.696 Multi-path I/O 00:19:57.696 May have multiple subsystem ports: Yes 00:19:57.696 May have multiple controllers: Yes 00:19:57.696 Associated with SR-IOV VF: No 00:19:57.696 Max Data Transfer Size: 131072 00:19:57.696 Max Number of Namespaces: 32 00:19:57.696 Max Number of I/O Queues: 127 00:19:57.696 NVMe Specification Version (VS): 1.3 00:19:57.696 NVMe Specification Version (Identify): 1.3 00:19:57.696 Maximum Queue Entries: 256 00:19:57.696 Contiguous Queues Required: Yes 00:19:57.696 Arbitration Mechanisms Supported 00:19:57.696 Weighted Round Robin: Not Supported 00:19:57.696 Vendor Specific: Not Supported 00:19:57.696 Reset Timeout: 15000 ms 00:19:57.696 Doorbell Stride: 4 bytes 00:19:57.696 NVM Subsystem Reset: Not Supported 00:19:57.696 Command Sets Supported 00:19:57.696 NVM Command Set: Supported 00:19:57.696 Boot Partition: Not Supported 00:19:57.696 Memory Page Size Minimum: 4096 bytes 00:19:57.696 Memory Page Size Maximum: 4096 bytes 00:19:57.696 Persistent Memory Region: Not Supported 00:19:57.696 Optional Asynchronous Events Supported 00:19:57.696 Namespace Attribute Notices: Supported 00:19:57.696 Firmware Activation Notices: Not Supported 00:19:57.696 ANA Change Notices: Not Supported 00:19:57.696 PLE Aggregate Log Change Notices: Not Supported 00:19:57.696 LBA Status Info Alert Notices: Not Supported 00:19:57.696 EGE Aggregate Log Change Notices: Not Supported 00:19:57.696 Normal NVM Subsystem Shutdown event: Not Supported 00:19:57.696 Zone Descriptor Change Notices: Not Supported 00:19:57.696 Discovery Log Change Notices: Not Supported 00:19:57.696 Controller Attributes 00:19:57.696 128-bit Host Identifier: Supported 00:19:57.696 
Non-Operational Permissive Mode: Not Supported 00:19:57.696 NVM Sets: Not Supported 00:19:57.696 Read Recovery Levels: Not Supported 00:19:57.696 Endurance Groups: Not Supported 00:19:57.696 Predictable Latency Mode: Not Supported 00:19:57.696 Traffic Based Keep ALive: Not Supported 00:19:57.696 Namespace Granularity: Not Supported 00:19:57.696 SQ Associations: Not Supported 00:19:57.696 UUID List: Not Supported 00:19:57.696 Multi-Domain Subsystem: Not Supported 00:19:57.696 Fixed Capacity Management: Not Supported 00:19:57.696 Variable Capacity Management: Not Supported 00:19:57.696 Delete Endurance Group: Not Supported 00:19:57.696 Delete NVM Set: Not Supported 00:19:57.696 Extended LBA Formats Supported: Not Supported 00:19:57.696 Flexible Data Placement Supported: Not Supported 00:19:57.696 00:19:57.696 Controller Memory Buffer Support 00:19:57.696 ================================ 00:19:57.696 Supported: No 00:19:57.696 00:19:57.696 Persistent Memory Region Support 00:19:57.696 ================================ 00:19:57.696 Supported: No 00:19:57.696 00:19:57.696 Admin Command Set Attributes 00:19:57.696 ============================ 00:19:57.696 Security Send/Receive: Not Supported 00:19:57.696 Format NVM: Not Supported 00:19:57.696 Firmware Activate/Download: Not Supported 00:19:57.696 Namespace Management: Not Supported 00:19:57.696 Device Self-Test: Not Supported 00:19:57.696 Directives: Not Supported 00:19:57.696 NVMe-MI: Not Supported 00:19:57.696 Virtualization Management: Not Supported 00:19:57.696 Doorbell Buffer Config: Not Supported 00:19:57.696 Get LBA Status Capability: Not Supported 00:19:57.696 Command & Feature Lockdown Capability: Not Supported 00:19:57.696 Abort Command Limit: 4 00:19:57.696 Async Event Request Limit: 4 00:19:57.696 Number of Firmware Slots: N/A 00:19:57.696 Firmware Slot 1 Read-Only: N/A 00:19:57.696 Firmware Activation Without Reset: N/A 00:19:57.696 Multiple Update Detection Support: N/A 00:19:57.696 Firmware Update 
Granularity: No Information Provided 00:19:57.696 Per-Namespace SMART Log: No 00:19:57.696 Asymmetric Namespace Access Log Page: Not Supported 00:19:57.696 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:57.696 Command Effects Log Page: Supported 00:19:57.696 Get Log Page Extended Data: Supported 00:19:57.697 Telemetry Log Pages: Not Supported 00:19:57.697 Persistent Event Log Pages: Not Supported 00:19:57.697 Supported Log Pages Log Page: May Support 00:19:57.697 Commands Supported & Effects Log Page: Not Supported 00:19:57.697 Feature Identifiers & Effects Log Page:May Support 00:19:57.697 NVMe-MI Commands & Effects Log Page: May Support 00:19:57.697 Data Area 4 for Telemetry Log: Not Supported 00:19:57.697 Error Log Page Entries Supported: 128 00:19:57.697 Keep Alive: Supported 00:19:57.697 Keep Alive Granularity: 10000 ms 00:19:57.697 00:19:57.697 NVM Command Set Attributes 00:19:57.697 ========================== 00:19:57.697 Submission Queue Entry Size 00:19:57.697 Max: 64 00:19:57.697 Min: 64 00:19:57.697 Completion Queue Entry Size 00:19:57.697 Max: 16 00:19:57.697 Min: 16 00:19:57.697 Number of Namespaces: 32 00:19:57.697 Compare Command: Supported 00:19:57.697 Write Uncorrectable Command: Not Supported 00:19:57.697 Dataset Management Command: Supported 00:19:57.697 Write Zeroes Command: Supported 00:19:57.697 Set Features Save Field: Not Supported 00:19:57.697 Reservations: Not Supported 00:19:57.697 Timestamp: Not Supported 00:19:57.697 Copy: Supported 00:19:57.697 Volatile Write Cache: Present 00:19:57.697 Atomic Write Unit (Normal): 1 00:19:57.697 Atomic Write Unit (PFail): 1 00:19:57.697 Atomic Compare & Write Unit: 1 00:19:57.697 Fused Compare & Write: Supported 00:19:57.697 Scatter-Gather List 00:19:57.697 SGL Command Set: Supported (Dword aligned) 00:19:57.697 SGL Keyed: Not Supported 00:19:57.697 SGL Bit Bucket Descriptor: Not Supported 00:19:57.697 SGL Metadata Pointer: Not Supported 00:19:57.697 Oversized SGL: Not Supported 00:19:57.697 SGL 
Metadata Address: Not Supported 00:19:57.697 SGL Offset: Not Supported 00:19:57.697 Transport SGL Data Block: Not Supported 00:19:57.697 Replay Protected Memory Block: Not Supported 00:19:57.697 00:19:57.697 Firmware Slot Information 00:19:57.697 ========================= 00:19:57.697 Active slot: 1 00:19:57.697 Slot 1 Firmware Revision: 25.01 00:19:57.697 00:19:57.697 00:19:57.697 Commands Supported and Effects 00:19:57.697 ============================== 00:19:57.697 Admin Commands 00:19:57.697 -------------- 00:19:57.697 Get Log Page (02h): Supported 00:19:57.697 Identify (06h): Supported 00:19:57.697 Abort (08h): Supported 00:19:57.697 Set Features (09h): Supported 00:19:57.697 Get Features (0Ah): Supported 00:19:57.697 Asynchronous Event Request (0Ch): Supported 00:19:57.697 Keep Alive (18h): Supported 00:19:57.697 I/O Commands 00:19:57.697 ------------ 00:19:57.697 Flush (00h): Supported LBA-Change 00:19:57.697 Write (01h): Supported LBA-Change 00:19:57.697 Read (02h): Supported 00:19:57.697 Compare (05h): Supported 00:19:57.697 Write Zeroes (08h): Supported LBA-Change 00:19:57.697 Dataset Management (09h): Supported LBA-Change 00:19:57.697 Copy (19h): Supported LBA-Change 00:19:57.697 00:19:57.697 Error Log 00:19:57.697 ========= 00:19:57.697 00:19:57.697 Arbitration 00:19:57.697 =========== 00:19:57.697 Arbitration Burst: 1 00:19:57.697 00:19:57.697 Power Management 00:19:57.697 ================ 00:19:57.697 Number of Power States: 1 00:19:57.697 Current Power State: Power State #0 00:19:57.697 Power State #0: 00:19:57.697 Max Power: 0.00 W 00:19:57.697 Non-Operational State: Operational 00:19:57.697 Entry Latency: Not Reported 00:19:57.697 Exit Latency: Not Reported 00:19:57.697 Relative Read Throughput: 0 00:19:57.697 Relative Read Latency: 0 00:19:57.697 Relative Write Throughput: 0 00:19:57.697 Relative Write Latency: 0 00:19:57.697 Idle Power: Not Reported 00:19:57.697 Active Power: Not Reported 00:19:57.697 Non-Operational Permissive Mode: Not 
Supported 00:19:57.697 00:19:57.697 Health Information 00:19:57.697 ================== 00:19:57.697 Critical Warnings: 00:19:57.697 Available Spare Space: OK 00:19:57.697 Temperature: OK 00:19:57.697 Device Reliability: OK 00:19:57.697 Read Only: No 00:19:57.697 Volatile Memory Backup: OK 00:19:57.697 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:57.697 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:57.697 Available Spare: 0% 00:19:57.697 Available Sp[2024-11-05 12:34:26.716068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:57.697 [2024-11-05 12:34:26.723869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:57.697 [2024-11-05 12:34:26.723923] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:57.697 [2024-11-05 12:34:26.723942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.697 [2024-11-05 12:34:26.723954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.697 [2024-11-05 12:34:26.723964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.697 [2024-11-05 12:34:26.723974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.697 [2024-11-05 12:34:26.724063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:57.697 [2024-11-05 12:34:26.724084] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:57.697 
[2024-11-05 12:34:26.725069] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:57.697 [2024-11-05 12:34:26.725141] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:57.697 [2024-11-05 12:34:26.725169] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:57.697 [2024-11-05 12:34:26.726073] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:57.697 [2024-11-05 12:34:26.726098] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:57.698 [2024-11-05 12:34:26.726152] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:57.698 [2024-11-05 12:34:26.728872] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:57.698 are Threshold: 0% 00:19:57.698 Life Percentage Used: 0% 00:19:57.698 Data Units Read: 0 00:19:57.698 Data Units Written: 0 00:19:57.698 Host Read Commands: 0 00:19:57.698 Host Write Commands: 0 00:19:57.698 Controller Busy Time: 0 minutes 00:19:57.698 Power Cycles: 0 00:19:57.698 Power On Hours: 0 hours 00:19:57.698 Unsafe Shutdowns: 0 00:19:57.698 Unrecoverable Media Errors: 0 00:19:57.698 Lifetime Error Log Entries: 0 00:19:57.698 Warning Temperature Time: 0 minutes 00:19:57.698 Critical Temperature Time: 0 minutes 00:19:57.698 00:19:57.698 Number of Queues 00:19:57.698 ================ 00:19:57.698 Number of I/O Submission Queues: 127 00:19:57.698 Number of I/O Completion Queues: 127 00:19:57.698 00:19:57.698 Active Namespaces 00:19:57.698 ================= 00:19:57.698 Namespace ID:1 00:19:57.698 Error Recovery Timeout: Unlimited 
00:19:57.698 Command Set Identifier: NVM (00h) 00:19:57.698 Deallocate: Supported 00:19:57.698 Deallocated/Unwritten Error: Not Supported 00:19:57.698 Deallocated Read Value: Unknown 00:19:57.698 Deallocate in Write Zeroes: Not Supported 00:19:57.698 Deallocated Guard Field: 0xFFFF 00:19:57.698 Flush: Supported 00:19:57.698 Reservation: Supported 00:19:57.698 Namespace Sharing Capabilities: Multiple Controllers 00:19:57.698 Size (in LBAs): 131072 (0GiB) 00:19:57.698 Capacity (in LBAs): 131072 (0GiB) 00:19:57.698 Utilization (in LBAs): 131072 (0GiB) 00:19:57.698 NGUID: 8312BD9FC273464595CAB7A035505BD7 00:19:57.698 UUID: 8312bd9f-c273-4645-95ca-b7a035505bd7 00:19:57.698 Thin Provisioning: Not Supported 00:19:57.698 Per-NS Atomic Units: Yes 00:19:57.698 Atomic Boundary Size (Normal): 0 00:19:57.698 Atomic Boundary Size (PFail): 0 00:19:57.698 Atomic Boundary Offset: 0 00:19:57.698 Maximum Single Source Range Length: 65535 00:19:57.698 Maximum Copy Length: 65535 00:19:57.698 Maximum Source Range Count: 1 00:19:57.698 NGUID/EUI64 Never Reused: No 00:19:57.698 Namespace Write Protected: No 00:19:57.698 Number of LBA Formats: 1 00:19:57.698 Current LBA Format: LBA Format #00 00:19:57.698 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:57.698 00:19:57.698 12:34:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:57.956 [2024-11-05 12:34:26.977635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:03.214 Initializing NVMe Controllers 00:20:03.214 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:03.214 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:20:03.214 Initialization complete. Launching workers. 00:20:03.214 ======================================================== 00:20:03.214 Latency(us) 00:20:03.214 Device Information : IOPS MiB/s Average min max 00:20:03.214 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33704.53 131.66 3796.93 1181.97 7693.89 00:20:03.214 ======================================================== 00:20:03.214 Total : 33704.53 131.66 3796.93 1181.97 7693.89 00:20:03.214 00:20:03.214 [2024-11-05 12:34:32.080260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:03.214 12:34:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:03.214 [2024-11-05 12:34:32.331928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:08.474 Initializing NVMe Controllers 00:20:08.474 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:08.474 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:08.474 Initialization complete. Launching workers. 
00:20:08.474 ======================================================== 00:20:08.474 Latency(us) 00:20:08.474 Device Information : IOPS MiB/s Average min max 00:20:08.474 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30857.19 120.54 4147.33 1209.54 7484.41 00:20:08.474 ======================================================== 00:20:08.474 Total : 30857.19 120.54 4147.33 1209.54 7484.41 00:20:08.474 00:20:08.474 [2024-11-05 12:34:37.349822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:08.474 12:34:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:08.474 [2024-11-05 12:34:37.580675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:13.734 [2024-11-05 12:34:42.717026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:13.734 Initializing NVMe Controllers 00:20:13.734 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:13.734 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:13.734 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:13.734 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:13.734 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:13.734 Initialization complete. Launching workers. 
00:20:13.734 Starting thread on core 2 00:20:13.734 Starting thread on core 3 00:20:13.734 Starting thread on core 1 00:20:13.734 12:34:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:13.993 [2024-11-05 12:34:43.040910] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:17.275 [2024-11-05 12:34:46.111301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:17.275 Initializing NVMe Controllers 00:20:17.275 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:17.275 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:17.275 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:17.275 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:17.275 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:17.275 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:17.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:17.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:17.275 Initialization complete. Launching workers. 
00:20:17.275 Starting thread on core 1 with urgent priority queue 00:20:17.275 Starting thread on core 2 with urgent priority queue 00:20:17.275 Starting thread on core 3 with urgent priority queue 00:20:17.275 Starting thread on core 0 with urgent priority queue 00:20:17.275 SPDK bdev Controller (SPDK2 ) core 0: 5337.67 IO/s 18.73 secs/100000 ios 00:20:17.275 SPDK bdev Controller (SPDK2 ) core 1: 6151.33 IO/s 16.26 secs/100000 ios 00:20:17.275 SPDK bdev Controller (SPDK2 ) core 2: 5546.33 IO/s 18.03 secs/100000 ios 00:20:17.275 SPDK bdev Controller (SPDK2 ) core 3: 6248.33 IO/s 16.00 secs/100000 ios 00:20:17.275 ======================================================== 00:20:17.275 00:20:17.275 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:17.275 [2024-11-05 12:34:46.426898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:17.275 Initializing NVMe Controllers 00:20:17.275 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:17.275 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:17.275 Namespace ID: 1 size: 0GB 00:20:17.275 Initialization complete. 00:20:17.275 INFO: using host memory buffer for IO 00:20:17.275 Hello world! 
00:20:17.275 [2024-11-05 12:34:46.439061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:17.275 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:17.532 [2024-11-05 12:34:46.742617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:18.904 Initializing NVMe Controllers 00:20:18.904 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:18.904 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:18.904 Initialization complete. Launching workers. 00:20:18.904 submit (in ns) avg, min, max = 6924.0, 3481.1, 4015747.8 00:20:18.904 complete (in ns) avg, min, max = 26260.3, 2050.0, 4016564.4 00:20:18.904 00:20:18.904 Submit histogram 00:20:18.904 ================ 00:20:18.904 Range in us Cumulative Count 00:20:18.904 3.461 - 3.484: 0.0226% ( 3) 00:20:18.904 3.484 - 3.508: 0.4446% ( 56) 00:20:18.904 3.508 - 3.532: 1.3263% ( 117) 00:20:18.904 3.532 - 3.556: 4.0015% ( 355) 00:20:18.904 3.556 - 3.579: 8.3723% ( 580) 00:20:18.904 3.579 - 3.603: 16.4280% ( 1069) 00:20:18.904 3.603 - 3.627: 24.4009% ( 1058) 00:20:18.904 3.627 - 3.650: 35.2148% ( 1435) 00:20:18.904 3.650 - 3.674: 42.5697% ( 976) 00:20:18.904 3.674 - 3.698: 49.8116% ( 961) 00:20:18.904 3.698 - 3.721: 55.6820% ( 779) 00:20:18.904 3.721 - 3.745: 60.6481% ( 659) 00:20:18.904 3.745 - 3.769: 64.9284% ( 568) 00:20:18.904 3.769 - 3.793: 68.7943% ( 513) 00:20:18.904 3.793 - 3.816: 72.1251% ( 442) 00:20:18.904 3.816 - 3.840: 75.7800% ( 485) 00:20:18.904 3.840 - 3.864: 79.5026% ( 494) 00:20:18.904 3.864 - 3.887: 82.5396% ( 403) 00:20:18.904 3.887 - 3.911: 85.4559% ( 387) 00:20:18.904 3.911 - 3.935: 87.6865% ( 296) 00:20:18.904 3.935 - 3.959: 89.5328% ( 245) 
00:20:18.904 3.959 - 3.982: 91.2660% ( 230) 00:20:18.904 3.982 - 4.006: 92.8711% ( 213) 00:20:18.904 4.006 - 4.030: 93.9563% ( 144) 00:20:18.904 4.030 - 4.053: 94.9586% ( 133) 00:20:18.904 4.053 - 4.077: 95.6368% ( 90) 00:20:18.904 4.077 - 4.101: 96.0965% ( 61) 00:20:18.904 4.101 - 4.124: 96.4280% ( 44) 00:20:18.904 4.124 - 4.148: 96.6315% ( 27) 00:20:18.904 4.148 - 4.172: 96.7747% ( 19) 00:20:18.904 4.172 - 4.196: 96.8726% ( 13) 00:20:18.904 4.196 - 4.219: 96.9179% ( 6) 00:20:18.904 4.219 - 4.243: 97.0158% ( 13) 00:20:18.904 4.243 - 4.267: 97.1364% ( 16) 00:20:18.904 4.267 - 4.290: 97.2344% ( 13) 00:20:18.904 4.290 - 4.314: 97.3248% ( 12) 00:20:18.904 4.314 - 4.338: 97.3700% ( 6) 00:20:18.904 4.338 - 4.361: 97.4228% ( 7) 00:20:18.904 4.361 - 4.385: 97.4604% ( 5) 00:20:18.904 4.385 - 4.409: 97.4906% ( 4) 00:20:18.904 4.409 - 4.433: 97.5057% ( 2) 00:20:18.904 4.433 - 4.456: 97.5132% ( 1) 00:20:18.904 4.480 - 4.504: 97.5207% ( 1) 00:20:18.904 4.527 - 4.551: 97.5358% ( 2) 00:20:18.904 4.551 - 4.575: 97.5433% ( 1) 00:20:18.904 4.599 - 4.622: 97.5584% ( 2) 00:20:18.904 4.622 - 4.646: 97.5659% ( 1) 00:20:18.904 4.646 - 4.670: 97.5885% ( 3) 00:20:18.904 4.670 - 4.693: 97.6187% ( 4) 00:20:18.904 4.717 - 4.741: 97.6488% ( 4) 00:20:18.904 4.741 - 4.764: 97.6714% ( 3) 00:20:18.904 4.764 - 4.788: 97.7543% ( 11) 00:20:18.904 4.788 - 4.812: 97.7920% ( 5) 00:20:18.904 4.812 - 4.836: 97.8222% ( 4) 00:20:18.904 4.836 - 4.859: 97.8824% ( 8) 00:20:18.904 4.859 - 4.883: 97.9126% ( 4) 00:20:18.904 4.883 - 4.907: 97.9578% ( 6) 00:20:18.904 4.907 - 4.930: 98.0030% ( 6) 00:20:18.904 4.930 - 4.954: 98.0332% ( 4) 00:20:18.904 4.954 - 4.978: 98.0859% ( 7) 00:20:18.904 4.978 - 5.001: 98.1161% ( 4) 00:20:18.904 5.001 - 5.025: 98.1236% ( 1) 00:20:18.904 5.025 - 5.049: 98.1311% ( 1) 00:20:18.904 5.049 - 5.073: 98.1763% ( 6) 00:20:18.904 5.073 - 5.096: 98.2140% ( 5) 00:20:18.904 5.096 - 5.120: 98.2442% ( 4) 00:20:18.904 5.144 - 5.167: 98.2517% ( 1) 00:20:18.904 5.191 - 5.215: 98.2743% ( 3) 
00:20:18.904 5.215 - 5.239: 98.2894% ( 2) 00:20:18.904 5.239 - 5.262: 98.3044% ( 2) 00:20:18.904 5.286 - 5.310: 98.3271% ( 3) 00:20:18.904 5.310 - 5.333: 98.3421% ( 2) 00:20:18.904 5.333 - 5.357: 98.3572% ( 2) 00:20:18.904 5.357 - 5.381: 98.3647% ( 1) 00:20:18.904 5.404 - 5.428: 98.3723% ( 1) 00:20:18.904 5.452 - 5.476: 98.3798% ( 1) 00:20:18.904 5.499 - 5.523: 98.3873% ( 1) 00:20:18.904 5.570 - 5.594: 98.3949% ( 1) 00:20:18.904 5.807 - 5.831: 98.4024% ( 1) 00:20:18.904 5.973 - 5.997: 98.4099% ( 1) 00:20:18.904 6.258 - 6.305: 98.4175% ( 1) 00:20:18.904 6.542 - 6.590: 98.4250% ( 1) 00:20:18.904 6.684 - 6.732: 98.4326% ( 1) 00:20:18.904 6.732 - 6.779: 98.4476% ( 2) 00:20:18.904 6.827 - 6.874: 98.4778% ( 4) 00:20:18.904 6.969 - 7.016: 98.4928% ( 2) 00:20:18.904 7.016 - 7.064: 98.5079% ( 2) 00:20:18.904 7.064 - 7.111: 98.5154% ( 1) 00:20:18.904 7.159 - 7.206: 98.5230% ( 1) 00:20:18.904 7.253 - 7.301: 98.5456% ( 3) 00:20:18.904 7.301 - 7.348: 98.5682% ( 3) 00:20:18.904 7.396 - 7.443: 98.5757% ( 1) 00:20:18.904 7.443 - 7.490: 98.5833% ( 1) 00:20:18.904 7.538 - 7.585: 98.5908% ( 1) 00:20:18.904 7.585 - 7.633: 98.5983% ( 1) 00:20:18.904 7.680 - 7.727: 98.6059% ( 1) 00:20:18.904 7.727 - 7.775: 98.6134% ( 1) 00:20:18.904 7.775 - 7.822: 98.6209% ( 1) 00:20:18.904 7.964 - 8.012: 98.6285% ( 1) 00:20:18.904 8.012 - 8.059: 98.6511% ( 3) 00:20:18.904 8.059 - 8.107: 98.6586% ( 1) 00:20:18.904 8.107 - 8.154: 98.6662% ( 1) 00:20:18.904 8.154 - 8.201: 98.6888% ( 3) 00:20:18.904 8.249 - 8.296: 98.7114% ( 3) 00:20:18.904 8.296 - 8.344: 98.7189% ( 1) 00:20:18.904 8.344 - 8.391: 98.7340% ( 2) 00:20:18.904 8.391 - 8.439: 98.7415% ( 1) 00:20:18.904 8.439 - 8.486: 98.7491% ( 1) 00:20:18.904 8.581 - 8.628: 98.7641% ( 2) 00:20:18.904 8.628 - 8.676: 98.7717% ( 1) 00:20:18.904 8.676 - 8.723: 98.7792% ( 1) 00:20:18.904 8.723 - 8.770: 98.7943% ( 2) 00:20:18.904 8.770 - 8.818: 98.8093% ( 2) 00:20:18.904 8.913 - 8.960: 98.8169% ( 1) 00:20:18.904 9.007 - 9.055: 98.8244% ( 1) 00:20:18.904 9.055 - 
9.102: 98.8320% ( 1) 00:20:18.904 9.102 - 9.150: 98.8395% ( 1) 00:20:18.904 9.434 - 9.481: 98.8470% ( 1) 00:20:18.904 9.481 - 9.529: 98.8546% ( 1) 00:20:18.904 9.671 - 9.719: 98.8621% ( 1) 00:20:18.904 9.719 - 9.766: 98.8696% ( 1) 00:20:18.904 10.287 - 10.335: 98.8772% ( 1) 00:20:18.904 10.477 - 10.524: 98.8847% ( 1) 00:20:18.904 11.093 - 11.141: 98.8998% ( 2) 00:20:18.904 11.141 - 11.188: 98.9073% ( 1) 00:20:18.904 11.330 - 11.378: 98.9148% ( 1) 00:20:18.904 11.425 - 11.473: 98.9224% ( 1) 00:20:18.904 11.473 - 11.520: 98.9299% ( 1) 00:20:18.904 11.520 - 11.567: 98.9375% ( 1) 00:20:18.904 11.710 - 11.757: 98.9450% ( 1) 00:20:18.904 11.804 - 11.852: 98.9525% ( 1) 00:20:18.904 11.852 - 11.899: 98.9601% ( 1) 00:20:18.904 11.994 - 12.041: 98.9676% ( 1) 00:20:18.904 12.089 - 12.136: 98.9751% ( 1) 00:20:18.904 12.231 - 12.326: 98.9827% ( 1) 00:20:18.904 12.326 - 12.421: 98.9902% ( 1) 00:20:18.905 12.421 - 12.516: 98.9977% ( 1) 00:20:18.905 12.516 - 12.610: 99.0053% ( 1) 00:20:18.905 12.990 - 13.084: 99.0128% ( 1) 00:20:18.905 13.274 - 13.369: 99.0203% ( 1) 00:20:18.905 13.369 - 13.464: 99.0354% ( 2) 00:20:18.905 13.748 - 13.843: 99.0430% ( 1) 00:20:18.905 13.938 - 14.033: 99.0505% ( 1) 00:20:18.905 14.033 - 14.127: 99.0580% ( 1) 00:20:18.905 14.317 - 14.412: 99.0656% ( 1) 00:20:18.905 14.601 - 14.696: 99.0731% ( 1) 00:20:18.905 14.981 - 15.076: 99.0806% ( 1) 00:20:18.905 15.076 - 15.170: 99.0882% ( 1) 00:20:18.905 17.067 - 17.161: 99.0957% ( 1) 00:20:18.905 17.161 - 17.256: 99.1334% ( 5) 00:20:18.905 17.256 - 17.351: 99.1485% ( 2) 00:20:18.905 17.351 - 17.446: 99.1861% ( 5) 00:20:18.905 17.446 - 17.541: 99.1937% ( 1) 00:20:18.905 17.541 - 17.636: 99.2313% ( 5) 00:20:18.905 17.636 - 17.730: 99.2766% ( 6) 00:20:18.905 17.730 - 17.825: 99.2992% ( 3) 00:20:18.905 17.825 - 17.920: 99.3369% ( 5) 00:20:18.905 17.920 - 18.015: 99.3519% ( 2) 00:20:18.905 18.015 - 18.110: 99.4197% ( 9) 00:20:18.905 18.110 - 18.204: 99.4876% ( 9) 00:20:18.905 18.204 - 18.299: 99.5177% ( 4) 
00:20:18.905 18.299 - 18.394: 99.5479% ( 4) 00:20:18.905 18.394 - 18.489: 99.6157% ( 9) 00:20:18.905 18.489 - 18.584: 99.6458% ( 4) 00:20:18.905 18.584 - 18.679: 99.6986% ( 7) 00:20:18.905 18.679 - 18.773: 99.7061% ( 1) 00:20:18.905 18.773 - 18.868: 99.7438% ( 5) 00:20:18.905 18.868 - 18.963: 99.7589% ( 2) 00:20:18.905 18.963 - 19.058: 99.7890% ( 4) 00:20:18.905 19.058 - 19.153: 99.8191% ( 4) 00:20:18.905 19.342 - 19.437: 99.8342% ( 2) 00:20:18.905 19.532 - 19.627: 99.8493% ( 2) 00:20:18.905 19.627 - 19.721: 99.8568% ( 1) 00:20:18.905 19.721 - 19.816: 99.8644% ( 1) 00:20:18.905 19.911 - 20.006: 99.8719% ( 1) 00:20:18.905 21.523 - 21.618: 99.8794% ( 1) 00:20:18.905 23.609 - 23.704: 99.8870% ( 1) 00:20:18.905 24.841 - 25.031: 99.8945% ( 1) 00:20:18.905 27.117 - 27.307: 99.9020% ( 1) 00:20:18.905 31.858 - 32.047: 99.9096% ( 1) 00:20:18.905 33.185 - 33.375: 99.9171% ( 1) 00:20:18.905 37.167 - 37.357: 99.9246% ( 1) 00:20:18.905 3980.705 - 4004.978: 99.9774% ( 7) 00:20:18.905 4004.978 - 4029.250: 100.0000% ( 3) 00:20:18.905 00:20:18.905 Complete histogram 00:20:18.905 ================== 00:20:18.905 Range in us Cumulative Count 00:20:18.905 2.039 - 2.050: 0.0075% ( 1) 00:20:18.905 2.050 - 2.062: 8.3346% ( 1105) 00:20:18.905 2.062 - 2.074: 42.6978% ( 4560) 00:20:18.905 2.074 - 2.086: 47.5132% ( 639) 00:20:18.905 2.086 - 2.098: 52.2155% ( 624) 00:20:18.905 2.098 - 2.110: 57.9804% ( 765) 00:20:18.905 2.110 - 2.121: 59.9096% ( 256) 00:20:18.905 2.121 - 2.133: 71.8990% ( 1591) 00:20:18.905 2.133 - 2.145: 80.3919% ( 1127) 00:20:18.905 2.145 - 2.157: 81.9668% ( 209) 00:20:18.905 2.157 - 2.169: 85.1997% ( 429) 00:20:18.905 2.169 - 2.181: 86.8726% ( 222) 00:20:18.905 2.181 - 2.193: 87.6790% ( 107) 00:20:18.905 2.193 - 2.204: 89.6232% ( 258) 00:20:18.905 2.204 - 2.216: 91.6428% ( 268) 00:20:18.905 2.216 - 2.228: 93.4137% ( 235) 00:20:18.905 2.228 - 2.240: 94.5365% ( 149) 00:20:18.905 2.240 - 2.252: 95.0038% ( 62) 00:20:18.905 2.252 - 2.264: 95.1696% ( 22) 00:20:18.905 2.264 - 
2.276: 95.3127% ( 19) 00:20:18.905 2.276 - 2.287: 95.6217% ( 41) 00:20:18.905 2.287 - 2.299: 95.9005% ( 37) 00:20:18.905 2.299 - 2.311: 95.9759% ( 10) 00:20:18.905 2.311 - 2.323: 96.1266% ( 20) 00:20:18.905 2.323 - 2.335: 96.1869% ( 8) 00:20:18.905 2.335 - 2.347: 96.2547% ( 9) 00:20:18.905 2.347 - 2.359: 96.3451% ( 12) 00:20:18.905 2.359 - 2.370: 96.5787% ( 31) 00:20:18.905 2.370 - 2.382: 96.8500% ( 36) 00:20:18.905 2.382 - 2.394: 97.1138% ( 35) 00:20:18.905 2.394 - 2.406: 97.4228% ( 41) 00:20:18.905 2.406 - 2.418: 97.7468% ( 43) 00:20:18.905 2.418 - 2.430: 97.9201% ( 23) 00:20:18.905 2.430 - 2.441: 98.1085% ( 25) 00:20:18.905 2.441 - 2.453: 98.1989% ( 12) 00:20:18.905 2.453 - 2.465: 98.2818% ( 11) 00:20:18.905 2.465 - 2.477: 98.3271% ( 6) 00:20:18.905 2.477 - 2.489: 98.3798% ( 7) 00:20:18.905 2.489 - 2.501: 98.4099% ( 4) 00:20:18.905 2.501 - 2.513: 98.4476% ( 5) 00:20:18.905 2.513 - 2.524: 98.4778% ( 4) 00:20:18.905 2.524 - 2.536: 98.5154% ( 5) 00:20:18.905 2.536 - 2.548: 98.5230% ( 1) 00:20:18.905 2.548 - 2.560: 98.5381% ( 2) 00:20:18.905 2.560 - 2.572: 98.5456% ( 1) 00:20:18.905 2.643 - 2.655: 98.5607% ( 2) 00:20:18.905 2.679 - 2.690: 98.5682% ( 1) 00:20:18.905 2.738 - 2.750: 98.5757% ( 1) 00:20:18.905 2.785 - 2.797: 98.5833% ( 1) 00:20:18.905 2.809 - 2.821: 98.5908% ( 1) 00:20:18.905 3.342 - 3.366: 98.5983% ( 1) 00:20:18.905 3.366 - 3.390: 98.6059% ( 1) 00:20:18.905 3.461 - 3.484: 98.6209% ( 2) 00:20:18.905 3.484 - 3.508: 98.6511% ( 4) 00:20:18.905 3.508 - 3.532: 98.6586% ( 1) 00:20:18.905 3.532 - 3.556: 98.6662% ( 1) 00:20:18.905 3.556 - 3.579: 98.6888% ( 3) 00:20:18.905 3.579 - 3.603: 98.7038% ( 2) 00:20:18.905 3.603 - 3.627: 98.7189% ( 2) 00:20:18.905 3.627 - 3.650: 98.7340% ( 2) 00:20:18.905 3.721 - 3.745: 98.7415% ( 1) 00:20:18.905 3.769 - 3.793: 98.7491% ( 1) 00:20:18.905 3.816 - 3.840: 98.7566% ( 1) 00:20:18.905 3.911 - 3.935: 98.7641% ( 1) 00:20:18.905 3.982 - 4.006: 98.7717% ( 1) 00:20:18.905 4.006 - 4.030: 98.7792% ( 1) 00:20:18.905 4.101 - 4.124: 
98.7867% ( 1) 00:20:18.905 4.124 - 4.148: 9[2024-11-05 12:34:47.842628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:18.905 8.7943% ( 1) 00:20:18.905 4.148 - 4.172: 98.8018% ( 1) 00:20:18.905 4.219 - 4.243: 98.8093% ( 1) 00:20:18.905 4.361 - 4.385: 98.8169% ( 1) 00:20:18.905 4.456 - 4.480: 98.8244% ( 1) 00:20:18.905 4.670 - 4.693: 98.8320% ( 1) 00:20:18.905 5.025 - 5.049: 98.8395% ( 1) 00:20:18.905 5.357 - 5.381: 98.8470% ( 1) 00:20:18.905 5.641 - 5.665: 98.8546% ( 1) 00:20:18.905 5.855 - 5.879: 98.8621% ( 1) 00:20:18.905 5.879 - 5.902: 98.8696% ( 1) 00:20:18.905 6.400 - 6.447: 98.8772% ( 1) 00:20:18.905 6.684 - 6.732: 98.8847% ( 1) 00:20:18.905 6.779 - 6.827: 98.8998% ( 2) 00:20:18.905 6.969 - 7.016: 98.9073% ( 1) 00:20:18.905 7.159 - 7.206: 98.9148% ( 1) 00:20:18.905 7.727 - 7.775: 98.9224% ( 1) 00:20:18.905 8.960 - 9.007: 98.9299% ( 1) 00:20:18.905 15.550 - 15.644: 98.9450% ( 2) 00:20:18.905 15.644 - 15.739: 98.9601% ( 2) 00:20:18.905 15.739 - 15.834: 98.9827% ( 3) 00:20:18.905 15.834 - 15.929: 99.0128% ( 4) 00:20:18.905 15.929 - 16.024: 99.0279% ( 2) 00:20:18.905 16.024 - 16.119: 99.0580% ( 4) 00:20:18.905 16.119 - 16.213: 99.1032% ( 6) 00:20:18.905 16.213 - 16.308: 99.1183% ( 2) 00:20:18.905 16.308 - 16.403: 99.1409% ( 3) 00:20:18.905 16.403 - 16.498: 99.1485% ( 1) 00:20:18.905 16.498 - 16.593: 99.1861% ( 5) 00:20:18.905 16.593 - 16.687: 99.2163% ( 4) 00:20:18.905 16.687 - 16.782: 99.2389% ( 3) 00:20:18.905 16.782 - 16.877: 99.2766% ( 5) 00:20:18.905 16.877 - 16.972: 99.2841% ( 1) 00:20:18.905 16.972 - 17.067: 99.3067% ( 3) 00:20:18.905 17.067 - 17.161: 99.3218% ( 2) 00:20:18.905 17.256 - 17.351: 99.3293% ( 1) 00:20:18.905 17.351 - 17.446: 99.3369% ( 1) 00:20:18.905 17.446 - 17.541: 99.3444% ( 1) 00:20:18.905 17.730 - 17.825: 99.3595% ( 2) 00:20:18.905 18.015 - 18.110: 99.3745% ( 2) 00:20:18.905 18.110 - 18.204: 99.3821% ( 1) 00:20:18.905 18.204 - 18.299: 99.3896% ( 1) 00:20:18.905 20.290 - 
20.385: 99.3971% ( 1) 00:20:18.905 3325.345 - 3349.618: 99.4047% ( 1) 00:20:18.905 3980.705 - 4004.978: 99.8644% ( 61) 00:20:18.905 4004.978 - 4029.250: 100.0000% ( 18) 00:20:18.905 00:20:18.905 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:18.905 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:18.905 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:18.905 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:18.905 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:19.162 [ 00:20:19.162 { 00:20:19.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:19.162 "subtype": "Discovery", 00:20:19.162 "listen_addresses": [], 00:20:19.162 "allow_any_host": true, 00:20:19.162 "hosts": [] 00:20:19.162 }, 00:20:19.162 { 00:20:19.162 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:19.162 "subtype": "NVMe", 00:20:19.162 "listen_addresses": [ 00:20:19.162 { 00:20:19.162 "trtype": "VFIOUSER", 00:20:19.162 "adrfam": "IPv4", 00:20:19.162 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:19.162 "trsvcid": "0" 00:20:19.162 } 00:20:19.162 ], 00:20:19.162 "allow_any_host": true, 00:20:19.162 "hosts": [], 00:20:19.162 "serial_number": "SPDK1", 00:20:19.162 "model_number": "SPDK bdev Controller", 00:20:19.162 "max_namespaces": 32, 00:20:19.162 "min_cntlid": 1, 00:20:19.162 "max_cntlid": 65519, 00:20:19.162 "namespaces": [ 00:20:19.162 { 00:20:19.162 "nsid": 1, 00:20:19.162 "bdev_name": "Malloc1", 00:20:19.162 "name": "Malloc1", 00:20:19.162 "nguid": "A3342399AB7F45DFA338234649E5142D", 
00:20:19.162 "uuid": "a3342399-ab7f-45df-a338-234649e5142d" 00:20:19.162 }, 00:20:19.162 { 00:20:19.162 "nsid": 2, 00:20:19.162 "bdev_name": "Malloc3", 00:20:19.162 "name": "Malloc3", 00:20:19.162 "nguid": "D5D90F3E879D49639524CD6DC3F06E0D", 00:20:19.162 "uuid": "d5d90f3e-879d-4963-9524-cd6dc3f06e0d" 00:20:19.162 } 00:20:19.162 ] 00:20:19.162 }, 00:20:19.162 { 00:20:19.162 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:19.162 "subtype": "NVMe", 00:20:19.162 "listen_addresses": [ 00:20:19.162 { 00:20:19.162 "trtype": "VFIOUSER", 00:20:19.162 "adrfam": "IPv4", 00:20:19.162 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:19.162 "trsvcid": "0" 00:20:19.162 } 00:20:19.162 ], 00:20:19.162 "allow_any_host": true, 00:20:19.162 "hosts": [], 00:20:19.162 "serial_number": "SPDK2", 00:20:19.162 "model_number": "SPDK bdev Controller", 00:20:19.162 "max_namespaces": 32, 00:20:19.162 "min_cntlid": 1, 00:20:19.162 "max_cntlid": 65519, 00:20:19.162 "namespaces": [ 00:20:19.162 { 00:20:19.162 "nsid": 1, 00:20:19.162 "bdev_name": "Malloc2", 00:20:19.162 "name": "Malloc2", 00:20:19.162 "nguid": "8312BD9FC273464595CAB7A035505BD7", 00:20:19.162 "uuid": "8312bd9f-c273-4645-95ca-b7a035505bd7" 00:20:19.162 } 00:20:19.162 ] 00:20:19.162 } 00:20:19.162 ] 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=647306 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- common/autotest_common.sh@1267 -- # local i=0 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:19.162 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:19.162 [2024-11-05 12:34:48.335963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:19.420 Malloc4 00:20:19.420 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:19.677 [2024-11-05 12:34:48.722779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:19.677 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:19.677 Asynchronous Event Request test 00:20:19.677 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:19.677 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:19.677 Registering asynchronous event callbacks... 00:20:19.677 Starting namespace attribute notice tests for all controllers... 00:20:19.677 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:19.677 aer_cb - Changed Namespace 00:20:19.677 Cleaning up... 
00:20:19.935 [ 00:20:19.935 { 00:20:19.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:19.935 "subtype": "Discovery", 00:20:19.935 "listen_addresses": [], 00:20:19.935 "allow_any_host": true, 00:20:19.935 "hosts": [] 00:20:19.935 }, 00:20:19.935 { 00:20:19.935 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:19.935 "subtype": "NVMe", 00:20:19.935 "listen_addresses": [ 00:20:19.935 { 00:20:19.935 "trtype": "VFIOUSER", 00:20:19.935 "adrfam": "IPv4", 00:20:19.935 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:19.935 "trsvcid": "0" 00:20:19.935 } 00:20:19.935 ], 00:20:19.935 "allow_any_host": true, 00:20:19.935 "hosts": [], 00:20:19.935 "serial_number": "SPDK1", 00:20:19.935 "model_number": "SPDK bdev Controller", 00:20:19.935 "max_namespaces": 32, 00:20:19.935 "min_cntlid": 1, 00:20:19.935 "max_cntlid": 65519, 00:20:19.935 "namespaces": [ 00:20:19.935 { 00:20:19.935 "nsid": 1, 00:20:19.935 "bdev_name": "Malloc1", 00:20:19.935 "name": "Malloc1", 00:20:19.935 "nguid": "A3342399AB7F45DFA338234649E5142D", 00:20:19.935 "uuid": "a3342399-ab7f-45df-a338-234649e5142d" 00:20:19.935 }, 00:20:19.935 { 00:20:19.935 "nsid": 2, 00:20:19.935 "bdev_name": "Malloc3", 00:20:19.935 "name": "Malloc3", 00:20:19.935 "nguid": "D5D90F3E879D49639524CD6DC3F06E0D", 00:20:19.935 "uuid": "d5d90f3e-879d-4963-9524-cd6dc3f06e0d" 00:20:19.935 } 00:20:19.935 ] 00:20:19.935 }, 00:20:19.935 { 00:20:19.935 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:19.935 "subtype": "NVMe", 00:20:19.935 "listen_addresses": [ 00:20:19.935 { 00:20:19.935 "trtype": "VFIOUSER", 00:20:19.935 "adrfam": "IPv4", 00:20:19.935 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:19.935 "trsvcid": "0" 00:20:19.935 } 00:20:19.935 ], 00:20:19.935 "allow_any_host": true, 00:20:19.935 "hosts": [], 00:20:19.935 "serial_number": "SPDK2", 00:20:19.935 "model_number": "SPDK bdev Controller", 00:20:19.935 "max_namespaces": 32, 00:20:19.935 "min_cntlid": 1, 00:20:19.935 "max_cntlid": 65519, 00:20:19.935 "namespaces": [ 
00:20:19.935 { 00:20:19.935 "nsid": 1, 00:20:19.935 "bdev_name": "Malloc2", 00:20:19.935 "name": "Malloc2", 00:20:19.935 "nguid": "8312BD9FC273464595CAB7A035505BD7", 00:20:19.935 "uuid": "8312bd9f-c273-4645-95ca-b7a035505bd7" 00:20:19.935 }, 00:20:19.935 { 00:20:19.935 "nsid": 2, 00:20:19.935 "bdev_name": "Malloc4", 00:20:19.935 "name": "Malloc4", 00:20:19.935 "nguid": "442D8B3C143D4C2C8D9676057C5E716C", 00:20:19.935 "uuid": "442d8b3c-143d-4c2c-8d96-76057c5e716c" 00:20:19.935 } 00:20:19.935 ] 00:20:19.935 } 00:20:19.935 ] 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 647306 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 641706 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 641706 ']' 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 641706 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 641706 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 641706' 00:20:19.935 killing process with pid 641706 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@971 -- # kill 641706 00:20:19.935 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 641706 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=647446 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 647446' 00:20:20.194 Process pid: 647446 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 647446 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 647446 ']' 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.194 12:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.194 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:20.194 [2024-11-05 12:34:49.401842] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:20.194 [2024-11-05 12:34:49.402876] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:20:20.194 [2024-11-05 12:34:49.402937] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.453 [2024-11-05 12:34:49.468468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.453 [2024-11-05 12:34:49.510583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.453 [2024-11-05 12:34:49.510659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.453 [2024-11-05 12:34:49.510672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.453 [2024-11-05 12:34:49.510697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.453 [2024-11-05 12:34:49.510706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:20.453 [2024-11-05 12:34:49.512141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.453 [2024-11-05 12:34:49.512264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.453 [2024-11-05 12:34:49.512326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.453 [2024-11-05 12:34:49.512329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.453 [2024-11-05 12:34:49.600597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:20.453 [2024-11-05 12:34:49.600932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:20.453 [2024-11-05 12:34:49.601097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:20:20.453 [2024-11-05 12:34:49.601765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:20.453 [2024-11-05 12:34:49.602064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:20:20.453 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.453 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:20:20.453 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:21.829 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:21.829 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:21.829 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:21.829 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:21.829 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:21.829 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:22.086 Malloc1 00:20:22.086 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:22.344 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:22.909 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:20:23.166 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:23.166 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:23.166 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:23.424 Malloc2 00:20:23.424 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:23.681 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:23.938 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 647446 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 647446 ']' 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 647446 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:24.196 12:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 647446 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 647446' 00:20:24.196 killing process with pid 647446 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 647446 00:20:24.196 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 647446 00:20:24.453 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:24.453 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:24.453 00:20:24.453 real 0m53.909s 00:20:24.453 user 3m28.531s 00:20:24.453 sys 0m3.952s 00:20:24.453 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:24.453 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:24.453 ************************************ 00:20:24.453 END TEST nvmf_vfio_user 00:20:24.453 ************************************ 00:20:24.454 12:34:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:24.454 12:34:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:24.454 12:34:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:24.454 12:34:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.454 ************************************ 00:20:24.454 START TEST nvmf_vfio_user_nvme_compliance 00:20:24.454 ************************************ 00:20:24.454 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:24.712 * Looking for test storage... 00:20:24.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.712 12:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.712 12:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.712 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:24.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.712 --rc genhtml_branch_coverage=1 00:20:24.712 --rc genhtml_function_coverage=1 00:20:24.712 --rc genhtml_legend=1 00:20:24.712 --rc geninfo_all_blocks=1 00:20:24.712 --rc geninfo_unexecuted_blocks=1 00:20:24.712 00:20:24.712 ' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:24.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.713 --rc genhtml_branch_coverage=1 00:20:24.713 --rc genhtml_function_coverage=1 00:20:24.713 --rc genhtml_legend=1 00:20:24.713 --rc geninfo_all_blocks=1 00:20:24.713 --rc geninfo_unexecuted_blocks=1 00:20:24.713 00:20:24.713 ' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:24.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.713 --rc genhtml_branch_coverage=1 00:20:24.713 --rc genhtml_function_coverage=1 00:20:24.713 --rc 
genhtml_legend=1 00:20:24.713 --rc geninfo_all_blocks=1 00:20:24.713 --rc geninfo_unexecuted_blocks=1 00:20:24.713 00:20:24.713 ' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:24.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.713 --rc genhtml_branch_coverage=1 00:20:24.713 --rc genhtml_function_coverage=1 00:20:24.713 --rc genhtml_legend=1 00:20:24.713 --rc geninfo_all_blocks=1 00:20:24.713 --rc geninfo_unexecuted_blocks=1 00:20:24.713 00:20:24.713 ' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.713 12:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:24.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:24.713 12:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=648070 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 648070' 00:20:24.713 Process pid: 648070 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 648070 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 648070 ']' 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:24.713 12:34:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:24.713 [2024-11-05 12:34:53.899966] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:20:24.713 [2024-11-05 12:34:53.900067] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.971 [2024-11-05 12:34:53.968513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:24.971 [2024-11-05 12:34:54.018440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.971 [2024-11-05 12:34:54.018508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.971 [2024-11-05 12:34:54.018536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.971 [2024-11-05 12:34:54.018547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.971 [2024-11-05 12:34:54.018557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.971 [2024-11-05 12:34:54.022879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.971 [2024-11-05 12:34:54.022947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.971 [2024-11-05 12:34:54.026888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.971 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:24.971 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:20:24.971 12:34:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.343 12:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:26.343 malloc0 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:26.343 12:34:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:26.343 00:20:26.343 00:20:26.343 CUnit - A unit testing framework for C - Version 2.1-3 00:20:26.343 http://cunit.sourceforge.net/ 00:20:26.343 00:20:26.344 00:20:26.344 Suite: nvme_compliance 00:20:26.344 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-05 12:34:55.401370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.344 [2024-11-05 12:34:55.402792] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:26.344 [2024-11-05 12:34:55.402816] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:26.344 [2024-11-05 12:34:55.402829] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:26.344 [2024-11-05 12:34:55.404389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.344 passed 00:20:26.344 Test: admin_identify_ctrlr_verify_fused ...[2024-11-05 12:34:55.492967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.344 [2024-11-05 12:34:55.495989] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.344 passed 00:20:26.601 Test: admin_identify_ns ...[2024-11-05 12:34:55.584988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.601 [2024-11-05 12:34:55.644876] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:26.601 [2024-11-05 12:34:55.652876] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:26.601 [2024-11-05 12:34:55.674017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:20:26.601 passed 00:20:26.601 Test: admin_get_features_mandatory_features ...[2024-11-05 12:34:55.757096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.601 [2024-11-05 12:34:55.760118] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.601 passed 00:20:26.858 Test: admin_get_features_optional_features ...[2024-11-05 12:34:55.843690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.858 [2024-11-05 12:34:55.848714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.859 passed 00:20:26.859 Test: admin_set_features_number_of_queues ...[2024-11-05 12:34:55.931993] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:26.859 [2024-11-05 12:34:56.040991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:26.859 passed 00:20:27.116 Test: admin_get_log_page_mandatory_logs ...[2024-11-05 12:34:56.124605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.116 [2024-11-05 12:34:56.127629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.116 passed 00:20:27.116 Test: admin_get_log_page_with_lpo ...[2024-11-05 12:34:56.209596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.116 [2024-11-05 12:34:56.277873] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:27.116 [2024-11-05 12:34:56.290972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.116 passed 00:20:27.373 Test: fabric_property_get ...[2024-11-05 12:34:56.374515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.374 [2024-11-05 12:34:56.375786] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:27.374 [2024-11-05 12:34:56.377542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.374 passed 00:20:27.374 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-05 12:34:56.459097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.374 [2024-11-05 12:34:56.460366] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:27.374 [2024-11-05 12:34:56.463122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.374 passed 00:20:27.374 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-05 12:34:56.550588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.631 [2024-11-05 12:34:56.633871] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:27.631 [2024-11-05 12:34:56.649868] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:27.631 [2024-11-05 12:34:56.654977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.631 passed 00:20:27.631 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-05 12:34:56.739636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.631 [2024-11-05 12:34:56.740963] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:27.631 [2024-11-05 12:34:56.742661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.631 passed 00:20:27.631 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-05 12:34:56.825202] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.888 [2024-11-05 12:34:56.904867] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:27.888 [2024-11-05 
12:34:56.928871] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:27.888 [2024-11-05 12:34:56.933975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.888 passed 00:20:27.888 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-05 12:34:57.017636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.888 [2024-11-05 12:34:57.018944] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:27.888 [2024-11-05 12:34:57.018984] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:27.888 [2024-11-05 12:34:57.020658] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.888 passed 00:20:27.888 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-05 12:34:57.107070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.146 [2024-11-05 12:34:57.199868] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:28.146 [2024-11-05 12:34:57.207867] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:28.146 [2024-11-05 12:34:57.215885] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:28.146 [2024-11-05 12:34:57.223869] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:28.146 [2024-11-05 12:34:57.252979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.146 passed 00:20:28.146 Test: admin_create_io_sq_verify_pc ...[2024-11-05 12:34:57.337590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.146 [2024-11-05 12:34:57.353883] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:28.146 [2024-11-05 12:34:57.371953] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.403 passed 00:20:28.403 Test: admin_create_io_qp_max_qps ...[2024-11-05 12:34:57.455519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:29.336 [2024-11-05 12:34:58.537891] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:29.900 [2024-11-05 12:34:58.911835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:29.900 passed 00:20:29.900 Test: admin_create_io_sq_shared_cq ...[2024-11-05 12:34:58.996186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:29.900 [2024-11-05 12:34:59.127884] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:30.158 [2024-11-05 12:34:59.164969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:30.158 passed 00:20:30.158 00:20:30.158 Run Summary: Type Total Ran Passed Failed Inactive 00:20:30.158 suites 1 1 n/a 0 0 00:20:30.158 tests 18 18 18 0 0 00:20:30.158 asserts 360 360 360 0 n/a 00:20:30.158 00:20:30.158 Elapsed time = 1.557 seconds 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 648070 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 648070 ']' 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 648070 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 648070 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 648070' 00:20:30.158 killing process with pid 648070 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 648070 00:20:30.158 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 648070 00:20:30.415 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:30.415 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:30.415 00:20:30.415 real 0m5.778s 00:20:30.415 user 0m16.267s 00:20:30.415 sys 0m0.560s 00:20:30.415 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:30.415 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:30.415 ************************************ 00:20:30.415 END TEST nvmf_vfio_user_nvme_compliance 00:20:30.415 ************************************ 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.416 ************************************ 00:20:30.416 START TEST nvmf_vfio_user_fuzz 00:20:30.416 ************************************ 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:30.416 * Looking for test storage... 00:20:30.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:20:30.416 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:30.674 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.675 12:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:30.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.675 --rc genhtml_branch_coverage=1 00:20:30.675 --rc genhtml_function_coverage=1 00:20:30.675 --rc genhtml_legend=1 00:20:30.675 --rc geninfo_all_blocks=1 00:20:30.675 --rc geninfo_unexecuted_blocks=1 00:20:30.675 00:20:30.675 ' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:30.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.675 --rc genhtml_branch_coverage=1 00:20:30.675 --rc genhtml_function_coverage=1 00:20:30.675 --rc genhtml_legend=1 00:20:30.675 --rc geninfo_all_blocks=1 00:20:30.675 --rc geninfo_unexecuted_blocks=1 00:20:30.675 00:20:30.675 ' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:30.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.675 --rc genhtml_branch_coverage=1 00:20:30.675 --rc genhtml_function_coverage=1 00:20:30.675 --rc genhtml_legend=1 00:20:30.675 --rc geninfo_all_blocks=1 00:20:30.675 --rc geninfo_unexecuted_blocks=1 00:20:30.675 00:20:30.675 ' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:30.675 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:30.675 --rc genhtml_branch_coverage=1 00:20:30.675 --rc genhtml_function_coverage=1 00:20:30.675 --rc genhtml_legend=1 00:20:30.675 --rc geninfo_all_blocks=1 00:20:30.675 --rc geninfo_unexecuted_blocks=1 00:20:30.675 00:20:30.675 ' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.675 12:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=648796 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 648796' 00:20:30.675 Process pid: 648796 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 648796 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 648796 ']' 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.675 12:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.675 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:30.933 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.933 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:20:30.933 12:34:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:31.866 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:31.866 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.866 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.866 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.866 12:35:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.866 malloc0 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:20:31.866 12:35:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:03.963 Fuzzing completed. Shutting down the fuzz application 00:21:03.963 00:21:03.963 Dumping successful admin opcodes: 00:21:03.963 8, 9, 10, 24, 00:21:03.963 Dumping successful io opcodes: 00:21:03.963 0, 00:21:03.963 NS: 0x20000081ef00 I/O qp, Total commands completed: 668103, total successful commands: 2607, random_seed: 4217028096 00:21:03.963 NS: 0x20000081ef00 admin qp, Total commands completed: 85697, total successful commands: 685, random_seed: 489599872 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 648796 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 648796 ']' 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 648796 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 648796 00:21:03.963 12:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 648796' 00:21:03.963 killing process with pid 648796 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 648796 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 648796 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:03.963 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:03.963 00:21:03.963 real 0m32.191s 00:21:03.963 user 0m30.234s 00:21:03.963 sys 0m29.513s 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:03.964 ************************************ 00:21:03.964 END TEST nvmf_vfio_user_fuzz 00:21:03.964 ************************************ 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:03.964 ************************************ 00:21:03.964 START TEST nvmf_auth_target 00:21:03.964 ************************************ 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:03.964 * Looking for test storage... 00:21:03.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:03.964 12:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:03.964 12:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:03.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.964 --rc genhtml_branch_coverage=1 00:21:03.964 --rc genhtml_function_coverage=1 00:21:03.964 --rc genhtml_legend=1 00:21:03.964 --rc geninfo_all_blocks=1 00:21:03.964 --rc geninfo_unexecuted_blocks=1 00:21:03.964 00:21:03.964 ' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:03.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.964 --rc genhtml_branch_coverage=1 00:21:03.964 --rc genhtml_function_coverage=1 00:21:03.964 --rc genhtml_legend=1 00:21:03.964 --rc geninfo_all_blocks=1 00:21:03.964 --rc geninfo_unexecuted_blocks=1 00:21:03.964 00:21:03.964 ' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:03.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.964 --rc genhtml_branch_coverage=1 00:21:03.964 --rc genhtml_function_coverage=1 00:21:03.964 --rc genhtml_legend=1 00:21:03.964 --rc geninfo_all_blocks=1 00:21:03.964 --rc geninfo_unexecuted_blocks=1 00:21:03.964 00:21:03.964 ' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:03.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:03.964 --rc genhtml_branch_coverage=1 00:21:03.964 --rc genhtml_function_coverage=1 00:21:03.964 --rc genhtml_legend=1 00:21:03.964 
--rc geninfo_all_blocks=1 00:21:03.964 --rc geninfo_unexecuted_blocks=1 00:21:03.964 00:21:03.964 ' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.964 
12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.964 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:03.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:03.965 12:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:03.965 12:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.965 12:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.900 12:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.900 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.901 12:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:04.901 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:04.901 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.901 
12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:04.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.901 
12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:04.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.901 12:35:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.901 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:04.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:21:04.901 00:21:04.901 --- 10.0.0.2 ping statistics --- 00:21:04.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.901 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:21:04.901 00:21:04.901 --- 10.0.0.1 ping statistics --- 00:21:04.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.901 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=654243 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 654243 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 654243 ']' 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:04.901 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.160 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:05.160 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:05.160 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:05.160 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:05.160 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.418 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=654269 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e476c16fc468eed0405a07e265e13cda62700a4bafb2d261 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.e0M 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e476c16fc468eed0405a07e265e13cda62700a4bafb2d261 0 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e476c16fc468eed0405a07e265e13cda62700a4bafb2d261 0 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e476c16fc468eed0405a07e265e13cda62700a4bafb2d261 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.e0M 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.e0M 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.e0M 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=499208ad89f77557750cc54d706397cd937947c75bf106e110868d60d59f4064 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cev 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 499208ad89f77557750cc54d706397cd937947c75bf106e110868d60d59f4064 3 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 499208ad89f77557750cc54d706397cd937947c75bf106e110868d60d59f4064 3 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=499208ad89f77557750cc54d706397cd937947c75bf106e110868d60d59f4064 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cev 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cev 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.cev 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6437a1c3a4191123db72c9211fc3f6e7 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.wjI 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6437a1c3a4191123db72c9211fc3f6e7 1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
6437a1c3a4191123db72c9211fc3f6e7 1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6437a1c3a4191123db72c9211fc3f6e7 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.wjI 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.wjI 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.wjI 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=39875867f281dfac21191aaf02686cf7d17fefe54f15fb12 00:21:05.419 12:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QW5 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 39875867f281dfac21191aaf02686cf7d17fefe54f15fb12 2 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 39875867f281dfac21191aaf02686cf7d17fefe54f15fb12 2 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=39875867f281dfac21191aaf02686cf7d17fefe54f15fb12 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QW5 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QW5 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.QW5 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5e7bead214ef68872d9c51f6838da1041dae97e90dc566c3 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qZG 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5e7bead214ef68872d9c51f6838da1041dae97e90dc566c3 2 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5e7bead214ef68872d9c51f6838da1041dae97e90dc566c3 2 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5e7bead214ef68872d9c51f6838da1041dae97e90dc566c3 00:21:05.419 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:05.420 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qZG 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qZG 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.qZG 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d471c39ab26ac470c91e4312ab768877 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Tcb 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d471c39ab26ac470c91e4312ab768877 1 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d471c39ab26ac470c91e4312ab768877 1 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d471c39ab26ac470c91e4312ab768877 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Tcb 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Tcb 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Tcb 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e912a2b9a44957379a03e9fca1d9f4e3ac5a3bf367c7eeef39e5ade95d15f50e 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oOL 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e912a2b9a44957379a03e9fca1d9f4e3ac5a3bf367c7eeef39e5ade95d15f50e 3 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 e912a2b9a44957379a03e9fca1d9f4e3ac5a3bf367c7eeef39e5ade95d15f50e 3 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e912a2b9a44957379a03e9fca1d9f4e3ac5a3bf367c7eeef39e5ade95d15f50e 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:05.678 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oOL 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oOL 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.oOL 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 654243 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 654243 ']' 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
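The `gen_dhchap_key` traces above repeat the same pipeline for each key: draw random hex from `/dev/urandom` via `xxd`, pass it through `format_key DHHC-1 <hex> <digest-id>` (an inline `python -` step whose body the log does not show), and write the result to a `chmod 0600` temp file such as `/tmp/spdk.key-null.e0M`. A minimal sketch of what that formatting step likely does, assuming the NVMe in-band authentication secret representation (base64 of the secret plus a little-endian CRC-32, wrapped in a `DHHC-1:<digest>:` prefix) — the function name `format_dhchap_key` here mirrors the log but is a re-implementation, not SPDK's actual helper:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Format a DH-HMAC-CHAP secret like the log's `python -` step.

    Assumption: the secret representation appends a little-endian CRC-32
    of the secret bytes before base64-encoding, per the NVMe in-band
    authentication secret format; the exact SPDK code is not in this log.
    """
    secret = key.encode("ascii")                      # ASCII hex string from xxd
    crc = zlib.crc32(secret).to_bytes(4, "little")    # integrity suffix
    b64 = base64.b64encode(secret + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"

# e.g. the 48-hex-char null-digest key (digest id 0) generated at common.sh@755:
key = "e476c16fc468eed0405a07e265e13cda62700a4bafb2d261"
print(format_dhchap_key(key, 0))
```

The digest id (`0` for null, `1`/`2`/`3` for sha256/sha384/sha512) matches the `digests` associative array declared at `common.sh@752` in the trace, which is why each `format_dhchap_key` call in the log pairs a key with the digest number of its filename.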
00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:05.679 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 654269 /var/tmp/host.sock 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 654269 ']' 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:05.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:05.936 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.e0M 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.e0M 00:21:06.194 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.e0M 00:21:06.452 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.cev ]] 00:21:06.452 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cev 00:21:06.452 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.452 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.452 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.452 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cev 00:21:06.452 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cev 00:21:06.710 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:06.710 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wjI 00:21:06.710 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.710 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.710 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.710 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.wjI 00:21:06.710 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.wjI 00:21:06.968 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.QW5 ]] 00:21:06.968 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QW5 00:21:06.968 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.968 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.968 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.968 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QW5 00:21:06.968 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QW5 00:21:07.226 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:07.226 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qZG 00:21:07.226 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.226 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.226 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.226 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qZG 00:21:07.226 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qZG 00:21:07.484 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Tcb ]] 00:21:07.484 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tcb 00:21:07.484 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.484 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.742 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.742 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tcb 00:21:07.742 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tcb 00:21:07.999 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:07.999 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oOL 00:21:07.999 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.999 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.999 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.000 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.oOL 00:21:08.000 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.oOL 00:21:08.257 12:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:08.257 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:08.257 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.257 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.257 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:08.257 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.515 12:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.515 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.773 00:21:08.773 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.773 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.773 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.031 { 00:21:09.031 "cntlid": 1, 00:21:09.031 "qid": 0, 00:21:09.031 "state": "enabled", 00:21:09.031 "thread": "nvmf_tgt_poll_group_000", 00:21:09.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.031 "listen_address": { 00:21:09.031 "trtype": "TCP", 00:21:09.031 "adrfam": "IPv4", 00:21:09.031 "traddr": "10.0.0.2", 00:21:09.031 "trsvcid": "4420" 00:21:09.031 }, 00:21:09.031 "peer_address": { 00:21:09.031 "trtype": "TCP", 00:21:09.031 "adrfam": "IPv4", 00:21:09.031 "traddr": "10.0.0.1", 00:21:09.031 "trsvcid": "59710" 00:21:09.031 }, 00:21:09.031 "auth": { 00:21:09.031 "state": "completed", 00:21:09.031 "digest": "sha256", 00:21:09.031 "dhgroup": "null" 00:21:09.031 } 00:21:09.031 } 00:21:09.031 ]' 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.031 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.597 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:09.597 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:10.162 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.420 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.420 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.420 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.420 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.420 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.420 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:21:10.420 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.677 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.935 00:21:10.935 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.935 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.935 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.192 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.192 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.192 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.192 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.192 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.192 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.192 { 00:21:11.192 "cntlid": 3, 00:21:11.192 "qid": 0, 00:21:11.192 "state": "enabled", 00:21:11.192 "thread": "nvmf_tgt_poll_group_000", 00:21:11.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.192 "listen_address": { 00:21:11.192 "trtype": "TCP", 00:21:11.192 "adrfam": "IPv4", 00:21:11.192 
"traddr": "10.0.0.2", 00:21:11.192 "trsvcid": "4420" 00:21:11.192 }, 00:21:11.192 "peer_address": { 00:21:11.192 "trtype": "TCP", 00:21:11.192 "adrfam": "IPv4", 00:21:11.192 "traddr": "10.0.0.1", 00:21:11.193 "trsvcid": "59738" 00:21:11.193 }, 00:21:11.193 "auth": { 00:21:11.193 "state": "completed", 00:21:11.193 "digest": "sha256", 00:21:11.193 "dhgroup": "null" 00:21:11.193 } 00:21:11.193 } 00:21:11.193 ]' 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.193 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.757 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:11.757 12:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:12.689 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.689 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.689 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.689 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.689 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.689 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.689 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.690 12:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.255 00:21:13.255 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.255 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.255 
12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.512 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.512 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.512 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.512 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.512 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.512 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.512 { 00:21:13.512 "cntlid": 5, 00:21:13.512 "qid": 0, 00:21:13.512 "state": "enabled", 00:21:13.512 "thread": "nvmf_tgt_poll_group_000", 00:21:13.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.512 "listen_address": { 00:21:13.512 "trtype": "TCP", 00:21:13.512 "adrfam": "IPv4", 00:21:13.512 "traddr": "10.0.0.2", 00:21:13.512 "trsvcid": "4420" 00:21:13.512 }, 00:21:13.512 "peer_address": { 00:21:13.512 "trtype": "TCP", 00:21:13.512 "adrfam": "IPv4", 00:21:13.512 "traddr": "10.0.0.1", 00:21:13.512 "trsvcid": "59764" 00:21:13.512 }, 00:21:13.512 "auth": { 00:21:13.512 "state": "completed", 00:21:13.512 "digest": "sha256", 00:21:13.512 "dhgroup": "null" 00:21:13.512 } 00:21:13.512 } 00:21:13.512 ]' 00:21:13.512 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.513 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.513 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:21:13.513 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:13.513 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.513 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.513 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.513 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.770 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:13.770 12:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:14.701 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.701 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.701 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.701 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.701 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.701 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.702 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:14.702 12:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.959 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.216 00:21:15.216 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.216 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.216 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.474 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.474 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.474 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.474 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.474 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.474 
12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.474 { 00:21:15.474 "cntlid": 7, 00:21:15.474 "qid": 0, 00:21:15.474 "state": "enabled", 00:21:15.474 "thread": "nvmf_tgt_poll_group_000", 00:21:15.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.474 "listen_address": { 00:21:15.474 "trtype": "TCP", 00:21:15.474 "adrfam": "IPv4", 00:21:15.474 "traddr": "10.0.0.2", 00:21:15.474 "trsvcid": "4420" 00:21:15.474 }, 00:21:15.474 "peer_address": { 00:21:15.474 "trtype": "TCP", 00:21:15.474 "adrfam": "IPv4", 00:21:15.474 "traddr": "10.0.0.1", 00:21:15.474 "trsvcid": "59790" 00:21:15.474 }, 00:21:15.474 "auth": { 00:21:15.474 "state": "completed", 00:21:15.474 "digest": "sha256", 00:21:15.474 "dhgroup": "null" 00:21:15.474 } 00:21:15.474 } 00:21:15.474 ]' 00:21:15.474 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.731 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.731 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.731 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:15.731 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.731 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.731 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.731 12:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.988 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:15.988 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:16.920 12:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.178 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.436 00:21:17.436 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.436 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.437 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.694 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.694 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.694 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.694 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.694 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.694 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.694 { 00:21:17.694 "cntlid": 9, 00:21:17.694 "qid": 0, 00:21:17.694 "state": "enabled", 00:21:17.694 "thread": "nvmf_tgt_poll_group_000", 00:21:17.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:17.694 "listen_address": { 00:21:17.694 "trtype": "TCP", 00:21:17.694 "adrfam": "IPv4", 00:21:17.694 "traddr": "10.0.0.2", 00:21:17.694 "trsvcid": "4420" 00:21:17.694 }, 00:21:17.694 "peer_address": { 00:21:17.694 "trtype": "TCP", 00:21:17.694 "adrfam": "IPv4", 00:21:17.694 "traddr": "10.0.0.1", 00:21:17.694 "trsvcid": "42882" 00:21:17.694 
}, 00:21:17.694 "auth": { 00:21:17.694 "state": "completed", 00:21:17.694 "digest": "sha256", 00:21:17.695 "dhgroup": "ffdhe2048" 00:21:17.695 } 00:21:17.695 } 00:21:17.695 ]' 00:21:17.695 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.695 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.695 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.695 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:17.952 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.952 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.952 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.952 12:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.210 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:18.210 12:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret 
DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:19.144 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.402 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.660 00:21:19.660 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.660 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.660 12:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.918 { 00:21:19.918 "cntlid": 11, 00:21:19.918 "qid": 0, 00:21:19.918 "state": "enabled", 00:21:19.918 "thread": "nvmf_tgt_poll_group_000", 00:21:19.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.918 "listen_address": { 00:21:19.918 "trtype": "TCP", 00:21:19.918 "adrfam": "IPv4", 00:21:19.918 "traddr": "10.0.0.2", 00:21:19.918 "trsvcid": "4420" 00:21:19.918 }, 00:21:19.918 "peer_address": { 00:21:19.918 "trtype": "TCP", 00:21:19.918 "adrfam": "IPv4", 00:21:19.918 "traddr": "10.0.0.1", 00:21:19.918 "trsvcid": "42912" 00:21:19.918 }, 00:21:19.918 "auth": { 00:21:19.918 "state": "completed", 00:21:19.918 "digest": "sha256", 00:21:19.918 "dhgroup": "ffdhe2048" 00:21:19.918 } 00:21:19.918 } 00:21:19.918 ]' 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.918 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.918 12:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.175 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.175 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.176 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.176 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.433 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:20.433 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:21.364 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.364 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.364 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:21.365 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.365 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.365 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.365 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:21.365 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.622 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.880 00:21:21.880 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.880 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.880 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.138 12:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.138 { 00:21:22.138 "cntlid": 13, 00:21:22.138 "qid": 0, 00:21:22.138 "state": "enabled", 00:21:22.138 "thread": "nvmf_tgt_poll_group_000", 00:21:22.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:22.138 "listen_address": { 00:21:22.138 "trtype": "TCP", 00:21:22.138 "adrfam": "IPv4", 00:21:22.138 "traddr": "10.0.0.2", 00:21:22.138 "trsvcid": "4420" 00:21:22.138 }, 00:21:22.138 "peer_address": { 00:21:22.138 "trtype": "TCP", 00:21:22.138 "adrfam": "IPv4", 00:21:22.138 "traddr": "10.0.0.1", 00:21:22.138 "trsvcid": "42934" 00:21:22.138 }, 00:21:22.138 "auth": { 00:21:22.138 "state": "completed", 00:21:22.138 "digest": "sha256", 00:21:22.138 "dhgroup": "ffdhe2048" 00:21:22.138 } 00:21:22.138 } 00:21:22.138 ]' 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.138 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.411 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:22.411 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:23.347 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.605 12:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.170 00:21:24.170 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.170 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.170 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.170 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.170 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.170 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.170 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.427 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.427 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.427 { 00:21:24.427 "cntlid": 15, 00:21:24.427 "qid": 0, 00:21:24.427 "state": "enabled", 00:21:24.427 "thread": "nvmf_tgt_poll_group_000", 00:21:24.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.427 "listen_address": { 00:21:24.427 "trtype": "TCP", 00:21:24.427 "adrfam": "IPv4", 00:21:24.427 "traddr": "10.0.0.2", 00:21:24.427 "trsvcid": "4420" 00:21:24.427 }, 00:21:24.427 "peer_address": { 00:21:24.427 "trtype": "TCP", 00:21:24.427 "adrfam": "IPv4", 00:21:24.427 "traddr": "10.0.0.1", 
00:21:24.427 "trsvcid": "42972" 00:21:24.427 }, 00:21:24.427 "auth": { 00:21:24.427 "state": "completed", 00:21:24.427 "digest": "sha256", 00:21:24.427 "dhgroup": "ffdhe2048" 00:21:24.427 } 00:21:24.427 } 00:21:24.427 ]' 00:21:24.427 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.427 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.427 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.427 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.428 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.428 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.428 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.428 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.685 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:24.685 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:25.618 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.876 12:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.876 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.134 00:21:26.391 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.391 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.391 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.649 { 00:21:26.649 "cntlid": 17, 00:21:26.649 "qid": 0, 00:21:26.649 "state": "enabled", 00:21:26.649 "thread": "nvmf_tgt_poll_group_000", 00:21:26.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.649 "listen_address": { 00:21:26.649 "trtype": "TCP", 00:21:26.649 "adrfam": "IPv4", 00:21:26.649 "traddr": "10.0.0.2", 00:21:26.649 "trsvcid": "4420" 00:21:26.649 }, 00:21:26.649 "peer_address": { 00:21:26.649 "trtype": "TCP", 00:21:26.649 "adrfam": "IPv4", 00:21:26.649 "traddr": "10.0.0.1", 00:21:26.649 "trsvcid": "43014" 00:21:26.649 }, 00:21:26.649 "auth": { 00:21:26.649 "state": "completed", 00:21:26.649 "digest": "sha256", 00:21:26.649 "dhgroup": "ffdhe3072" 00:21:26.649 } 00:21:26.649 } 00:21:26.649 ]' 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.649 12:35:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.649 12:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.907 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:26.907 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:27.839 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.839 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.839 12:35:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.839 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.839 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.839 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.839 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:27.839 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.097 12:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.097 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.354 00:21:28.354 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.354 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.354 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.950 { 00:21:28.950 "cntlid": 19, 00:21:28.950 "qid": 0, 00:21:28.950 "state": "enabled", 00:21:28.950 "thread": "nvmf_tgt_poll_group_000", 00:21:28.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.950 "listen_address": { 00:21:28.950 "trtype": "TCP", 00:21:28.950 "adrfam": "IPv4", 00:21:28.950 "traddr": "10.0.0.2", 00:21:28.950 "trsvcid": "4420" 00:21:28.950 }, 00:21:28.950 "peer_address": { 00:21:28.950 "trtype": "TCP", 00:21:28.950 "adrfam": "IPv4", 00:21:28.950 "traddr": "10.0.0.1", 00:21:28.950 "trsvcid": "42000" 00:21:28.950 }, 00:21:28.950 "auth": { 00:21:28.950 "state": "completed", 00:21:28.950 "digest": "sha256", 00:21:28.950 "dhgroup": "ffdhe3072" 00:21:28.950 } 00:21:28.950 } 00:21:28.950 ]' 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.950 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.234 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:29.234 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:30.168 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.168 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.168 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.168 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.168 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.168 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.168 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:30.168 12:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.426 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.683 00:21:30.683 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.683 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.683 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.941 { 00:21:30.941 "cntlid": 21, 00:21:30.941 "qid": 0, 00:21:30.941 "state": "enabled", 00:21:30.941 "thread": "nvmf_tgt_poll_group_000", 00:21:30.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.941 "listen_address": { 00:21:30.941 "trtype": "TCP", 00:21:30.941 "adrfam": "IPv4", 00:21:30.941 "traddr": "10.0.0.2", 00:21:30.941 
"trsvcid": "4420" 00:21:30.941 }, 00:21:30.941 "peer_address": { 00:21:30.941 "trtype": "TCP", 00:21:30.941 "adrfam": "IPv4", 00:21:30.941 "traddr": "10.0.0.1", 00:21:30.941 "trsvcid": "42022" 00:21:30.941 }, 00:21:30.941 "auth": { 00:21:30.941 "state": "completed", 00:21:30.941 "digest": "sha256", 00:21:30.941 "dhgroup": "ffdhe3072" 00:21:30.941 } 00:21:30.941 } 00:21:30.941 ]' 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.941 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.505 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:31.505 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.439 12:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.064 00:21:33.064 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.064 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.064 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.321 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.321 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.321 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.321 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.321 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.321 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.321 { 00:21:33.321 "cntlid": 23, 00:21:33.321 "qid": 0, 00:21:33.321 "state": "enabled", 00:21:33.322 "thread": "nvmf_tgt_poll_group_000", 00:21:33.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.322 "listen_address": { 00:21:33.322 "trtype": "TCP", 00:21:33.322 "adrfam": "IPv4", 00:21:33.322 "traddr": "10.0.0.2", 00:21:33.322 "trsvcid": "4420" 00:21:33.322 }, 00:21:33.322 "peer_address": { 00:21:33.322 "trtype": "TCP", 00:21:33.322 "adrfam": "IPv4", 00:21:33.322 "traddr": "10.0.0.1", 00:21:33.322 "trsvcid": "42032" 00:21:33.322 }, 00:21:33.322 "auth": { 00:21:33.322 "state": "completed", 00:21:33.322 "digest": "sha256", 00:21:33.322 "dhgroup": "ffdhe3072" 00:21:33.322 } 00:21:33.322 } 00:21:33.322 ]' 00:21:33.322 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.322 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.322 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.322 12:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.322 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.322 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.322 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.322 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.579 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:33.579 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.512 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.770 12:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.335 00:21:35.335 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.335 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.335 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.593 12:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.593 { 00:21:35.593 "cntlid": 25, 00:21:35.593 "qid": 0, 00:21:35.593 "state": "enabled", 00:21:35.593 "thread": "nvmf_tgt_poll_group_000", 00:21:35.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.593 "listen_address": { 00:21:35.593 "trtype": "TCP", 00:21:35.593 "adrfam": "IPv4", 00:21:35.593 "traddr": "10.0.0.2", 00:21:35.593 "trsvcid": "4420" 00:21:35.593 }, 00:21:35.593 "peer_address": { 00:21:35.593 "trtype": "TCP", 00:21:35.593 "adrfam": "IPv4", 00:21:35.593 "traddr": "10.0.0.1", 00:21:35.593 "trsvcid": "42076" 00:21:35.593 }, 00:21:35.593 "auth": { 00:21:35.593 "state": "completed", 00:21:35.593 "digest": "sha256", 00:21:35.593 "dhgroup": "ffdhe4096" 00:21:35.593 } 00:21:35.593 } 00:21:35.593 ]' 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.593 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.851 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:35.851 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:36.783 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.783 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.783 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.783 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.783 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.783 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.783 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:36.783 12:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:37.041 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.042 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.607 00:21:37.607 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.607 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.607 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.864 { 00:21:37.864 "cntlid": 27, 00:21:37.864 "qid": 0, 00:21:37.864 "state": "enabled", 00:21:37.864 "thread": "nvmf_tgt_poll_group_000", 00:21:37.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.864 "listen_address": { 00:21:37.864 "trtype": "TCP", 00:21:37.864 "adrfam": "IPv4", 00:21:37.864 "traddr": "10.0.0.2", 00:21:37.864 
"trsvcid": "4420" 00:21:37.864 }, 00:21:37.864 "peer_address": { 00:21:37.864 "trtype": "TCP", 00:21:37.864 "adrfam": "IPv4", 00:21:37.864 "traddr": "10.0.0.1", 00:21:37.864 "trsvcid": "33448" 00:21:37.864 }, 00:21:37.864 "auth": { 00:21:37.864 "state": "completed", 00:21:37.864 "digest": "sha256", 00:21:37.864 "dhgroup": "ffdhe4096" 00:21:37.864 } 00:21:37.864 } 00:21:37.864 ]' 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:37.864 12:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.865 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:37.865 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.865 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.865 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.865 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.430 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:38.430 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:38.996 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.254 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.254 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.254 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.254 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.254 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.254 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:39.254 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.512 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.770 00:21:39.770 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.770 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:39.770 12:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.028 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.028 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.028 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.028 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.028 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.028 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.028 { 00:21:40.028 "cntlid": 29, 00:21:40.028 "qid": 0, 00:21:40.028 "state": "enabled", 00:21:40.028 "thread": "nvmf_tgt_poll_group_000", 00:21:40.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.028 "listen_address": { 00:21:40.028 "trtype": "TCP", 00:21:40.028 "adrfam": "IPv4", 00:21:40.028 "traddr": "10.0.0.2", 00:21:40.028 "trsvcid": "4420" 00:21:40.028 }, 00:21:40.028 "peer_address": { 00:21:40.028 "trtype": "TCP", 00:21:40.028 "adrfam": "IPv4", 00:21:40.028 "traddr": "10.0.0.1", 00:21:40.028 "trsvcid": "33458" 00:21:40.028 }, 00:21:40.028 "auth": { 00:21:40.028 "state": "completed", 00:21:40.028 "digest": "sha256", 00:21:40.028 "dhgroup": "ffdhe4096" 00:21:40.028 } 00:21:40.028 } 00:21:40.028 ]' 00:21:40.028 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.285 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.285 12:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.285 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.285 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.285 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.286 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.286 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.544 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:40.544 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:41.477 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.735 12:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.993 00:21:41.993 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.993 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.993 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.251 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.251 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.251 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.251 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.508 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.508 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.508 { 00:21:42.508 "cntlid": 31, 00:21:42.508 "qid": 0, 00:21:42.508 "state": "enabled", 00:21:42.508 "thread": "nvmf_tgt_poll_group_000", 00:21:42.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.508 "listen_address": { 00:21:42.508 "trtype": "TCP", 00:21:42.509 "adrfam": "IPv4", 00:21:42.509 "traddr": "10.0.0.2", 00:21:42.509 "trsvcid": "4420" 00:21:42.509 }, 00:21:42.509 "peer_address": { 00:21:42.509 "trtype": "TCP", 00:21:42.509 "adrfam": "IPv4", 00:21:42.509 "traddr": "10.0.0.1", 00:21:42.509 "trsvcid": "33490" 00:21:42.509 }, 00:21:42.509 "auth": { 00:21:42.509 "state": "completed", 00:21:42.509 "digest": "sha256", 00:21:42.509 "dhgroup": "ffdhe4096" 00:21:42.509 } 00:21:42.509 } 00:21:42.509 ]' 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.509 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.766 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:42.766 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.699 12:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:43.699 12:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.957 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.522 00:21:44.522 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.522 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.522 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.780 { 00:21:44.780 "cntlid": 33, 00:21:44.780 "qid": 0, 00:21:44.780 "state": "enabled", 00:21:44.780 "thread": "nvmf_tgt_poll_group_000", 00:21:44.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.780 "listen_address": { 00:21:44.780 "trtype": "TCP", 00:21:44.780 "adrfam": "IPv4", 00:21:44.780 "traddr": "10.0.0.2", 00:21:44.780 
"trsvcid": "4420" 00:21:44.780 }, 00:21:44.780 "peer_address": { 00:21:44.780 "trtype": "TCP", 00:21:44.780 "adrfam": "IPv4", 00:21:44.780 "traddr": "10.0.0.1", 00:21:44.780 "trsvcid": "33516" 00:21:44.780 }, 00:21:44.780 "auth": { 00:21:44.780 "state": "completed", 00:21:44.780 "digest": "sha256", 00:21:44.780 "dhgroup": "ffdhe6144" 00:21:44.780 } 00:21:44.780 } 00:21:44.780 ]' 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.780 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.780 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.780 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.780 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.345 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:45.345 12:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.278 12:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.278 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.843 00:21:46.843 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.843 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.843 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.101 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.101 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.101 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.101 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.101 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.101 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.101 { 00:21:47.101 "cntlid": 35, 00:21:47.101 "qid": 0, 00:21:47.101 "state": "enabled", 00:21:47.101 "thread": "nvmf_tgt_poll_group_000", 00:21:47.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.101 "listen_address": { 00:21:47.101 "trtype": "TCP", 00:21:47.101 "adrfam": "IPv4", 00:21:47.101 "traddr": "10.0.0.2", 00:21:47.101 "trsvcid": "4420" 00:21:47.101 }, 00:21:47.101 "peer_address": { 00:21:47.101 "trtype": "TCP", 00:21:47.101 "adrfam": "IPv4", 00:21:47.101 "traddr": "10.0.0.1", 00:21:47.101 "trsvcid": "33536" 00:21:47.101 }, 00:21:47.101 "auth": { 00:21:47.101 "state": "completed", 00:21:47.101 "digest": "sha256", 00:21:47.101 "dhgroup": "ffdhe6144" 00:21:47.101 } 00:21:47.101 } 00:21:47.101 ]' 00:21:47.101 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.359 12:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.359 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.359 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.359 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.359 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.359 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.359 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.616 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:47.616 12:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.549 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.807 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.372 00:21:49.372 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.372 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.372 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.630 12:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.630 { 00:21:49.630 "cntlid": 37, 00:21:49.630 "qid": 0, 00:21:49.630 "state": "enabled", 00:21:49.630 "thread": "nvmf_tgt_poll_group_000", 00:21:49.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.630 "listen_address": { 00:21:49.630 "trtype": "TCP", 00:21:49.630 "adrfam": "IPv4", 00:21:49.630 "traddr": "10.0.0.2", 00:21:49.630 "trsvcid": "4420" 00:21:49.630 }, 00:21:49.630 "peer_address": { 00:21:49.630 "trtype": "TCP", 00:21:49.630 "adrfam": "IPv4", 00:21:49.630 "traddr": "10.0.0.1", 00:21:49.630 "trsvcid": "46122" 00:21:49.630 }, 00:21:49.630 "auth": { 00:21:49.630 "state": "completed", 00:21:49.630 "digest": "sha256", 00:21:49.630 "dhgroup": "ffdhe6144" 00:21:49.630 } 00:21:49.630 } 00:21:49.630 ]' 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.630 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.888 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:49.888 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:50.820 12:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.078 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.643 00:21:51.643 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.643 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.643 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.900 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.900 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.900 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.900 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.900 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.900 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.900 { 00:21:51.900 "cntlid": 39, 00:21:51.900 "qid": 0, 00:21:51.900 "state": "enabled", 00:21:51.900 "thread": "nvmf_tgt_poll_group_000", 00:21:51.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.900 "listen_address": { 00:21:51.900 "trtype": "TCP", 00:21:51.900 "adrfam": 
"IPv4", 00:21:51.900 "traddr": "10.0.0.2", 00:21:51.900 "trsvcid": "4420" 00:21:51.900 }, 00:21:51.900 "peer_address": { 00:21:51.900 "trtype": "TCP", 00:21:51.900 "adrfam": "IPv4", 00:21:51.900 "traddr": "10.0.0.1", 00:21:51.900 "trsvcid": "46164" 00:21:51.900 }, 00:21:51.900 "auth": { 00:21:51.900 "state": "completed", 00:21:51.900 "digest": "sha256", 00:21:51.900 "dhgroup": "ffdhe6144" 00:21:51.900 } 00:21:51.900 } 00:21:51.900 ]' 00:21:51.900 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.158 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.158 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.158 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.158 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.158 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.158 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.158 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.416 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:52.416 12:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:53.347 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:53.605 
12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.605 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.537 00:21:54.537 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.537 12:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.537 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.794 { 00:21:54.794 "cntlid": 41, 00:21:54.794 "qid": 0, 00:21:54.794 "state": "enabled", 00:21:54.794 "thread": "nvmf_tgt_poll_group_000", 00:21:54.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.794 "listen_address": { 00:21:54.794 "trtype": "TCP", 00:21:54.794 "adrfam": "IPv4", 00:21:54.794 "traddr": "10.0.0.2", 00:21:54.794 "trsvcid": "4420" 00:21:54.794 }, 00:21:54.794 "peer_address": { 00:21:54.794 "trtype": "TCP", 00:21:54.794 "adrfam": "IPv4", 00:21:54.794 "traddr": "10.0.0.1", 00:21:54.794 "trsvcid": "46188" 00:21:54.794 }, 00:21:54.794 "auth": { 00:21:54.794 "state": "completed", 00:21:54.794 "digest": "sha256", 00:21:54.794 "dhgroup": "ffdhe8192" 00:21:54.794 } 00:21:54.794 } 00:21:54.794 ]' 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.794 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:21:54.795 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.795 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.795 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.795 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.795 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.795 12:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.052 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:55.052 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:55.984 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.242 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.243 12:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.174 00:21:57.174 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.174 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.174 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.432 12:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.432 { 00:21:57.432 "cntlid": 43, 00:21:57.432 "qid": 0, 00:21:57.432 "state": "enabled", 00:21:57.432 "thread": "nvmf_tgt_poll_group_000", 00:21:57.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.432 "listen_address": { 00:21:57.432 "trtype": "TCP", 00:21:57.432 "adrfam": "IPv4", 00:21:57.432 "traddr": "10.0.0.2", 00:21:57.432 "trsvcid": "4420" 00:21:57.432 }, 00:21:57.432 "peer_address": { 00:21:57.432 "trtype": "TCP", 00:21:57.432 "adrfam": "IPv4", 00:21:57.432 "traddr": "10.0.0.1", 00:21:57.432 "trsvcid": "46218" 00:21:57.432 }, 00:21:57.432 "auth": { 00:21:57.432 "state": "completed", 00:21:57.432 "digest": "sha256", 00:21:57.432 "dhgroup": "ffdhe8192" 00:21:57.432 } 00:21:57.432 } 00:21:57.432 ]' 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.432 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.690 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:57.690 12:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:58.622 12:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.881 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.900 00:21:59.900 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.900 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.900 12:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.158 { 00:22:00.158 "cntlid": 45, 00:22:00.158 "qid": 0, 00:22:00.158 "state": "enabled", 00:22:00.158 "thread": "nvmf_tgt_poll_group_000", 00:22:00.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.158 
"listen_address": { 00:22:00.158 "trtype": "TCP", 00:22:00.158 "adrfam": "IPv4", 00:22:00.158 "traddr": "10.0.0.2", 00:22:00.158 "trsvcid": "4420" 00:22:00.158 }, 00:22:00.158 "peer_address": { 00:22:00.158 "trtype": "TCP", 00:22:00.158 "adrfam": "IPv4", 00:22:00.158 "traddr": "10.0.0.1", 00:22:00.158 "trsvcid": "48166" 00:22:00.158 }, 00:22:00.158 "auth": { 00:22:00.158 "state": "completed", 00:22:00.158 "digest": "sha256", 00:22:00.158 "dhgroup": "ffdhe8192" 00:22:00.158 } 00:22:00.158 } 00:22:00.158 ]' 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.158 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.416 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:00.416 12:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.347 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.605 12:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.539 00:22:02.539 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.539 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:22:02.539 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.798 { 00:22:02.798 "cntlid": 47, 00:22:02.798 "qid": 0, 00:22:02.798 "state": "enabled", 00:22:02.798 "thread": "nvmf_tgt_poll_group_000", 00:22:02.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:02.798 "listen_address": { 00:22:02.798 "trtype": "TCP", 00:22:02.798 "adrfam": "IPv4", 00:22:02.798 "traddr": "10.0.0.2", 00:22:02.798 "trsvcid": "4420" 00:22:02.798 }, 00:22:02.798 "peer_address": { 00:22:02.798 "trtype": "TCP", 00:22:02.798 "adrfam": "IPv4", 00:22:02.798 "traddr": "10.0.0.1", 00:22:02.798 "trsvcid": "48182" 00:22:02.798 }, 00:22:02.798 "auth": { 00:22:02.798 "state": "completed", 00:22:02.798 "digest": "sha256", 00:22:02.798 "dhgroup": "ffdhe8192" 00:22:02.798 } 00:22:02.798 } 00:22:02.798 ]' 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.798 12:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.798 12:36:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.056 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:03.056 12:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:03.991 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.991 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.991 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:03.991 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.991 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.991 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:03.991 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.992 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.992 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:03.992 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.250 
12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.250 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.817 00:22:04.817 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.817 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.817 12:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.075 { 00:22:05.075 "cntlid": 49, 00:22:05.075 "qid": 0, 00:22:05.075 "state": "enabled", 00:22:05.075 "thread": "nvmf_tgt_poll_group_000", 00:22:05.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.075 "listen_address": { 00:22:05.075 "trtype": "TCP", 00:22:05.075 "adrfam": "IPv4", 00:22:05.075 "traddr": "10.0.0.2", 00:22:05.075 "trsvcid": "4420" 00:22:05.075 }, 00:22:05.075 "peer_address": { 00:22:05.075 "trtype": "TCP", 00:22:05.075 "adrfam": "IPv4", 00:22:05.075 "traddr": "10.0.0.1", 00:22:05.075 "trsvcid": "48212" 00:22:05.075 }, 00:22:05.075 "auth": { 00:22:05.075 "state": "completed", 00:22:05.075 "digest": "sha384", 00:22:05.075 "dhgroup": "null" 00:22:05.075 } 00:22:05.075 } 00:22:05.075 ]' 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:22:05.075 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.333 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:05.334 12:36:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:06.267 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.267 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.267 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.267 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.267 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.267 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.267 12:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:06.267 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:06.525 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:06.525 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.525 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:06.525 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:06.525 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:06.525 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.526 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.526 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.526 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.526 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.526 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.526 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.526 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.801 00:22:06.801 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.801 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.801 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.059 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.059 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.059 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.059 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.059 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.059 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.059 { 00:22:07.059 "cntlid": 51, 00:22:07.059 "qid": 0, 00:22:07.059 "state": "enabled", 00:22:07.059 "thread": "nvmf_tgt_poll_group_000", 00:22:07.059 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.059 "listen_address": { 00:22:07.059 "trtype": "TCP", 00:22:07.059 "adrfam": "IPv4", 00:22:07.059 "traddr": "10.0.0.2", 00:22:07.059 "trsvcid": "4420" 00:22:07.059 }, 00:22:07.059 "peer_address": { 00:22:07.059 "trtype": "TCP", 00:22:07.059 "adrfam": "IPv4", 00:22:07.059 "traddr": "10.0.0.1", 00:22:07.059 "trsvcid": "48236" 00:22:07.059 }, 00:22:07.059 "auth": { 00:22:07.060 "state": "completed", 00:22:07.060 "digest": "sha384", 00:22:07.060 "dhgroup": "null" 00:22:07.060 } 00:22:07.060 } 00:22:07.060 ]' 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.318 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.576 12:36:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:07.576 12:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:08.511 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.769 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.770 12:36:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.028 00:22:09.028 12:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.028 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.028 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.287 { 00:22:09.287 "cntlid": 53, 00:22:09.287 "qid": 0, 00:22:09.287 "state": "enabled", 00:22:09.287 "thread": "nvmf_tgt_poll_group_000", 00:22:09.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:09.287 "listen_address": { 00:22:09.287 "trtype": "TCP", 00:22:09.287 "adrfam": "IPv4", 00:22:09.287 "traddr": "10.0.0.2", 00:22:09.287 "trsvcid": "4420" 00:22:09.287 }, 00:22:09.287 "peer_address": { 00:22:09.287 "trtype": "TCP", 00:22:09.287 "adrfam": "IPv4", 00:22:09.287 "traddr": "10.0.0.1", 00:22:09.287 "trsvcid": "51202" 00:22:09.287 }, 00:22:09.287 "auth": { 00:22:09.287 "state": "completed", 00:22:09.287 "digest": "sha384", 00:22:09.287 "dhgroup": "null" 00:22:09.287 } 00:22:09.287 } 00:22:09.287 ]' 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:09.287 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.545 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.545 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.545 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.803 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:09.803 12:36:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:10.738 12:36:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:10.996 
12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.996 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.255 00:22:11.255 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.255 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.255 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.513 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.513 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.513 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.513 12:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.513 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.513 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.513 { 00:22:11.513 "cntlid": 55, 00:22:11.513 "qid": 0, 00:22:11.513 "state": "enabled", 00:22:11.513 "thread": "nvmf_tgt_poll_group_000", 00:22:11.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.513 "listen_address": { 00:22:11.513 "trtype": "TCP", 00:22:11.513 "adrfam": "IPv4", 00:22:11.513 "traddr": "10.0.0.2", 00:22:11.513 "trsvcid": "4420" 00:22:11.513 }, 00:22:11.513 "peer_address": { 00:22:11.513 "trtype": "TCP", 00:22:11.513 "adrfam": "IPv4", 00:22:11.513 "traddr": "10.0.0.1", 00:22:11.513 "trsvcid": "51236" 00:22:11.514 }, 00:22:11.514 "auth": { 00:22:11.514 "state": "completed", 00:22:11.514 "digest": "sha384", 00:22:11.514 "dhgroup": "null" 00:22:11.514 } 00:22:11.514 } 00:22:11.514 ]' 00:22:11.514 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.514 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.514 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.514 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:11.514 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.772 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.772 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.772 12:36:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.030 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:12.030 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:12.966 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.966 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.967 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.967 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.967 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.967 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.967 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.967 12:36:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:12.967 12:36:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.967 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.533 00:22:13.533 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.533 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.533 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.533 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.533 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.533 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.533 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.790 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.790 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.790 { 00:22:13.790 "cntlid": 57, 00:22:13.790 "qid": 0, 00:22:13.790 "state": "enabled", 00:22:13.790 "thread": "nvmf_tgt_poll_group_000", 00:22:13.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.791 "listen_address": { 00:22:13.791 "trtype": "TCP", 00:22:13.791 "adrfam": "IPv4", 00:22:13.791 "traddr": "10.0.0.2", 00:22:13.791 
"trsvcid": "4420" 00:22:13.791 }, 00:22:13.791 "peer_address": { 00:22:13.791 "trtype": "TCP", 00:22:13.791 "adrfam": "IPv4", 00:22:13.791 "traddr": "10.0.0.1", 00:22:13.791 "trsvcid": "51274" 00:22:13.791 }, 00:22:13.791 "auth": { 00:22:13.791 "state": "completed", 00:22:13.791 "digest": "sha384", 00:22:13.791 "dhgroup": "ffdhe2048" 00:22:13.791 } 00:22:13.791 } 00:22:13.791 ]' 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.791 12:36:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.048 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:14.048 12:36:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:14.981 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.240 12:36:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.240 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.498 00:22:15.498 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.498 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.499 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.758 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.758 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.758 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.758 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.758 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.758 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.758 { 00:22:15.758 "cntlid": 59, 00:22:15.758 "qid": 0, 00:22:15.758 "state": "enabled", 00:22:15.758 "thread": "nvmf_tgt_poll_group_000", 00:22:15.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.758 "listen_address": { 00:22:15.758 "trtype": "TCP", 00:22:15.758 "adrfam": "IPv4", 00:22:15.758 "traddr": "10.0.0.2", 00:22:15.758 "trsvcid": "4420" 00:22:15.758 }, 00:22:15.758 "peer_address": { 00:22:15.758 "trtype": "TCP", 00:22:15.758 "adrfam": "IPv4", 00:22:15.758 "traddr": "10.0.0.1", 00:22:15.758 "trsvcid": "51296" 00:22:15.758 }, 00:22:15.758 "auth": { 00:22:15.758 "state": "completed", 00:22:15.758 "digest": "sha384", 00:22:15.758 "dhgroup": "ffdhe2048" 00:22:15.758 } 00:22:15.758 } 00:22:15.758 ]' 00:22:15.758 12:36:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.016 12:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.016 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.016 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:16.016 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.016 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.016 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.016 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.275 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:16.275 12:36:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.211 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.470 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.728 00:22:17.728 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.728 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.728 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.986 12:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.986 { 00:22:17.986 "cntlid": 61, 00:22:17.986 "qid": 0, 00:22:17.986 "state": "enabled", 00:22:17.986 "thread": "nvmf_tgt_poll_group_000", 00:22:17.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:17.986 "listen_address": { 00:22:17.986 "trtype": "TCP", 00:22:17.986 "adrfam": "IPv4", 00:22:17.986 "traddr": "10.0.0.2", 00:22:17.986 "trsvcid": "4420" 00:22:17.986 }, 00:22:17.986 "peer_address": { 00:22:17.986 "trtype": "TCP", 00:22:17.986 "adrfam": "IPv4", 00:22:17.986 "traddr": "10.0.0.1", 00:22:17.986 "trsvcid": "44232" 00:22:17.986 }, 00:22:17.986 "auth": { 00:22:17.986 "state": "completed", 00:22:17.986 "digest": "sha384", 00:22:17.986 "dhgroup": "ffdhe2048" 00:22:17.986 } 00:22:17.986 } 00:22:17.986 ]' 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.986 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.244 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.244 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.244 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.244 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.244 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.502 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:18.502 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:19.435 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.436 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.436 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.436 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.436 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.436 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.436 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.436 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.693 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.956 00:22:19.956 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.956 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.956 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.215 { 00:22:20.215 "cntlid": 63, 00:22:20.215 "qid": 0, 00:22:20.215 "state": "enabled", 00:22:20.215 "thread": "nvmf_tgt_poll_group_000", 00:22:20.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.215 "listen_address": { 00:22:20.215 "trtype": "TCP", 00:22:20.215 "adrfam": 
"IPv4", 00:22:20.215 "traddr": "10.0.0.2", 00:22:20.215 "trsvcid": "4420" 00:22:20.215 }, 00:22:20.215 "peer_address": { 00:22:20.215 "trtype": "TCP", 00:22:20.215 "adrfam": "IPv4", 00:22:20.215 "traddr": "10.0.0.1", 00:22:20.215 "trsvcid": "44268" 00:22:20.215 }, 00:22:20.215 "auth": { 00:22:20.215 "state": "completed", 00:22:20.215 "digest": "sha384", 00:22:20.215 "dhgroup": "ffdhe2048" 00:22:20.215 } 00:22:20.215 } 00:22:20.215 ]' 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:20.215 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.472 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.472 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.472 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.730 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:20.730 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:21.664 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:21.922 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:21.922 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.922 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.922 
12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:21.922 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:21.922 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.923 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.923 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.923 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.923 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.923 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.923 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.923 12:36:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.181 00:22:22.181 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.181 12:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.181 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.439 { 00:22:22.439 "cntlid": 65, 00:22:22.439 "qid": 0, 00:22:22.439 "state": "enabled", 00:22:22.439 "thread": "nvmf_tgt_poll_group_000", 00:22:22.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.439 "listen_address": { 00:22:22.439 "trtype": "TCP", 00:22:22.439 "adrfam": "IPv4", 00:22:22.439 "traddr": "10.0.0.2", 00:22:22.439 "trsvcid": "4420" 00:22:22.439 }, 00:22:22.439 "peer_address": { 00:22:22.439 "trtype": "TCP", 00:22:22.439 "adrfam": "IPv4", 00:22:22.439 "traddr": "10.0.0.1", 00:22:22.439 "trsvcid": "44294" 00:22:22.439 }, 00:22:22.439 "auth": { 00:22:22.439 "state": "completed", 00:22:22.439 "digest": "sha384", 00:22:22.439 "dhgroup": "ffdhe3072" 00:22:22.439 } 00:22:22.439 } 00:22:22.439 ]' 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:22.439 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.696 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.696 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.696 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.954 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:22.954 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:23.889 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.889 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.889 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.889 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.889 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.889 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.890 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.890 12:36:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.890 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.455 00:22:24.455 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.455 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.455 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.712 12:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.712 { 00:22:24.712 "cntlid": 67, 00:22:24.712 "qid": 0, 00:22:24.712 "state": "enabled", 00:22:24.712 "thread": "nvmf_tgt_poll_group_000", 00:22:24.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.712 "listen_address": { 00:22:24.712 "trtype": "TCP", 00:22:24.712 "adrfam": "IPv4", 00:22:24.712 "traddr": "10.0.0.2", 00:22:24.712 "trsvcid": "4420" 00:22:24.712 }, 00:22:24.712 "peer_address": { 00:22:24.712 "trtype": "TCP", 00:22:24.712 "adrfam": "IPv4", 00:22:24.712 "traddr": "10.0.0.1", 00:22:24.712 "trsvcid": "44326" 00:22:24.712 }, 00:22:24.712 "auth": { 00:22:24.712 "state": "completed", 00:22:24.712 "digest": "sha384", 00:22:24.712 "dhgroup": "ffdhe3072" 00:22:24.712 } 00:22:24.712 } 00:22:24.712 ]' 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.712 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:24.713 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.713 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.713 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.713 12:36:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.970 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:24.970 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.903 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.468 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.726 00:22:26.726 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.726 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.726 12:36:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.983 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.983 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.983 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.983 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.983 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.983 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.983 { 00:22:26.983 "cntlid": 69, 00:22:26.983 "qid": 0, 00:22:26.983 "state": "enabled", 00:22:26.983 "thread": "nvmf_tgt_poll_group_000", 00:22:26.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.984 
"listen_address": { 00:22:26.984 "trtype": "TCP", 00:22:26.984 "adrfam": "IPv4", 00:22:26.984 "traddr": "10.0.0.2", 00:22:26.984 "trsvcid": "4420" 00:22:26.984 }, 00:22:26.984 "peer_address": { 00:22:26.984 "trtype": "TCP", 00:22:26.984 "adrfam": "IPv4", 00:22:26.984 "traddr": "10.0.0.1", 00:22:26.984 "trsvcid": "44368" 00:22:26.984 }, 00:22:26.984 "auth": { 00:22:26.984 "state": "completed", 00:22:26.984 "digest": "sha384", 00:22:26.984 "dhgroup": "ffdhe3072" 00:22:26.984 } 00:22:26.984 } 00:22:26.984 ]' 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.984 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.241 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:27.241 12:36:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.174 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.432 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.998 00:22:28.998 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.998 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:22:28.998 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.285 { 00:22:29.285 "cntlid": 71, 00:22:29.285 "qid": 0, 00:22:29.285 "state": "enabled", 00:22:29.285 "thread": "nvmf_tgt_poll_group_000", 00:22:29.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.285 "listen_address": { 00:22:29.285 "trtype": "TCP", 00:22:29.285 "adrfam": "IPv4", 00:22:29.285 "traddr": "10.0.0.2", 00:22:29.285 "trsvcid": "4420" 00:22:29.285 }, 00:22:29.285 "peer_address": { 00:22:29.285 "trtype": "TCP", 00:22:29.285 "adrfam": "IPv4", 00:22:29.285 "traddr": "10.0.0.1", 00:22:29.285 "trsvcid": "38972" 00:22:29.285 }, 00:22:29.285 "auth": { 00:22:29.285 "state": "completed", 00:22:29.285 "digest": "sha384", 00:22:29.285 "dhgroup": "ffdhe3072" 00:22:29.285 } 00:22:29.285 } 00:22:29.285 ]' 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.285 12:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.285 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.572 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:29.572 12:36:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:30.504 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.504 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:30.504 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:30.504 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.505 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.505 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:30.505 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.505 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:30.505 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.762 12:36:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.019 00:22:31.276 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.276 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.276 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.534 12:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.534 { 00:22:31.534 "cntlid": 73, 00:22:31.534 "qid": 0, 00:22:31.534 "state": "enabled", 00:22:31.534 "thread": "nvmf_tgt_poll_group_000", 00:22:31.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:31.534 "listen_address": { 00:22:31.534 "trtype": "TCP", 00:22:31.534 "adrfam": "IPv4", 00:22:31.534 "traddr": "10.0.0.2", 00:22:31.534 "trsvcid": "4420" 00:22:31.534 }, 00:22:31.534 "peer_address": { 00:22:31.534 "trtype": "TCP", 00:22:31.534 "adrfam": "IPv4", 00:22:31.534 "traddr": "10.0.0.1", 00:22:31.534 "trsvcid": "39002" 00:22:31.534 }, 00:22:31.534 "auth": { 00:22:31.534 "state": "completed", 00:22:31.534 "digest": "sha384", 00:22:31.534 "dhgroup": "ffdhe4096" 00:22:31.534 } 00:22:31.534 } 00:22:31.534 ]' 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.534 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.534 12:37:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.791 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:31.791 12:37:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.724 12:37:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.982 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.239 00:22:33.497 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.497 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.497 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.755 { 00:22:33.755 "cntlid": 75, 00:22:33.755 "qid": 0, 00:22:33.755 "state": "enabled", 00:22:33.755 "thread": "nvmf_tgt_poll_group_000", 00:22:33.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:33.755 
"listen_address": { 00:22:33.755 "trtype": "TCP", 00:22:33.755 "adrfam": "IPv4", 00:22:33.755 "traddr": "10.0.0.2", 00:22:33.755 "trsvcid": "4420" 00:22:33.755 }, 00:22:33.755 "peer_address": { 00:22:33.755 "trtype": "TCP", 00:22:33.755 "adrfam": "IPv4", 00:22:33.755 "traddr": "10.0.0.1", 00:22:33.755 "trsvcid": "39026" 00:22:33.755 }, 00:22:33.755 "auth": { 00:22:33.755 "state": "completed", 00:22:33.755 "digest": "sha384", 00:22:33.755 "dhgroup": "ffdhe4096" 00:22:33.755 } 00:22:33.755 } 00:22:33.755 ]' 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.755 12:37:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.011 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:34.011 12:37:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.950 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.208 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.773 00:22:35.773 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:35.773 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.773 12:37:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.031 { 00:22:36.031 "cntlid": 77, 00:22:36.031 "qid": 0, 00:22:36.031 "state": "enabled", 00:22:36.031 "thread": "nvmf_tgt_poll_group_000", 00:22:36.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:36.031 "listen_address": { 00:22:36.031 "trtype": "TCP", 00:22:36.031 "adrfam": "IPv4", 00:22:36.031 "traddr": "10.0.0.2", 00:22:36.031 "trsvcid": "4420" 00:22:36.031 }, 00:22:36.031 "peer_address": { 00:22:36.031 "trtype": "TCP", 00:22:36.031 "adrfam": "IPv4", 00:22:36.031 "traddr": "10.0.0.1", 00:22:36.031 "trsvcid": "39058" 00:22:36.031 }, 00:22:36.031 "auth": { 00:22:36.031 "state": "completed", 00:22:36.031 "digest": "sha384", 00:22:36.031 "dhgroup": "ffdhe4096" 00:22:36.031 } 00:22:36.031 } 00:22:36.031 ]' 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.031 12:37:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.031 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.289 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:36.289 12:37:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:37.221 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.222 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.222 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.222 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.222 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.222 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.222 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:37.222 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:37.480 12:37:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.480 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.738 00:22:37.738 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.738 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.738 12:37:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.995 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.995 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.995 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.995 12:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.253 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.254 { 00:22:38.254 "cntlid": 79, 00:22:38.254 "qid": 0, 00:22:38.254 "state": "enabled", 00:22:38.254 "thread": "nvmf_tgt_poll_group_000", 00:22:38.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.254 "listen_address": { 00:22:38.254 "trtype": "TCP", 00:22:38.254 "adrfam": "IPv4", 00:22:38.254 "traddr": "10.0.0.2", 00:22:38.254 "trsvcid": "4420" 00:22:38.254 }, 00:22:38.254 "peer_address": { 00:22:38.254 "trtype": "TCP", 00:22:38.254 "adrfam": "IPv4", 00:22:38.254 "traddr": "10.0.0.1", 00:22:38.254 "trsvcid": "50832" 00:22:38.254 }, 00:22:38.254 "auth": { 00:22:38.254 "state": "completed", 00:22:38.254 "digest": "sha384", 00:22:38.254 "dhgroup": "ffdhe4096" 00:22:38.254 } 00:22:38.254 } 00:22:38.254 ]' 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.254 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.254 12:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.511 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:38.511 12:37:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:22:39.443 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.701 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.702 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.702 12:37:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.266 00:22:40.266 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.266 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.266 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.523 { 00:22:40.523 "cntlid": 81, 00:22:40.523 "qid": 0, 00:22:40.523 "state": "enabled", 00:22:40.523 "thread": "nvmf_tgt_poll_group_000", 00:22:40.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.523 "listen_address": { 
00:22:40.523 "trtype": "TCP", 00:22:40.523 "adrfam": "IPv4", 00:22:40.523 "traddr": "10.0.0.2", 00:22:40.523 "trsvcid": "4420" 00:22:40.523 }, 00:22:40.523 "peer_address": { 00:22:40.523 "trtype": "TCP", 00:22:40.523 "adrfam": "IPv4", 00:22:40.523 "traddr": "10.0.0.1", 00:22:40.523 "trsvcid": "50852" 00:22:40.523 }, 00:22:40.523 "auth": { 00:22:40.523 "state": "completed", 00:22:40.523 "digest": "sha384", 00:22:40.523 "dhgroup": "ffdhe6144" 00:22:40.523 } 00:22:40.523 } 00:22:40.523 ]' 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:40.523 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.779 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.780 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.780 12:37:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.037 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:41.037 12:37:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.969 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.227 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.795 00:22:42.795 12:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.795 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.795 12:37:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.054 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.055 { 00:22:43.055 "cntlid": 83, 00:22:43.055 "qid": 0, 00:22:43.055 "state": "enabled", 00:22:43.055 "thread": "nvmf_tgt_poll_group_000", 00:22:43.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:43.055 "listen_address": { 00:22:43.055 "trtype": "TCP", 00:22:43.055 "adrfam": "IPv4", 00:22:43.055 "traddr": "10.0.0.2", 00:22:43.055 "trsvcid": "4420" 00:22:43.055 }, 00:22:43.055 "peer_address": { 00:22:43.055 "trtype": "TCP", 00:22:43.055 "adrfam": "IPv4", 00:22:43.055 "traddr": "10.0.0.1", 00:22:43.055 "trsvcid": "50884" 00:22:43.055 }, 00:22:43.055 "auth": { 00:22:43.055 "state": "completed", 00:22:43.055 "digest": "sha384", 00:22:43.055 "dhgroup": "ffdhe6144" 00:22:43.055 } 00:22:43.055 } 00:22:43.055 ]' 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.055 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.314 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:43.314 12:37:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:44.253 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.253 12:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.253 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.253 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.253 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.253 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.253 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:44.253 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.512 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.513 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.513 12:37:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.079 00:22:45.079 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.079 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.079 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.339 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.339 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.339 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.339 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.339 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.339 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.339 { 00:22:45.339 "cntlid": 85, 00:22:45.339 "qid": 0, 00:22:45.339 "state": "enabled", 00:22:45.339 "thread": "nvmf_tgt_poll_group_000", 00:22:45.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.339 "listen_address": { 00:22:45.339 "trtype": "TCP", 00:22:45.339 "adrfam": "IPv4", 00:22:45.339 "traddr": "10.0.0.2", 00:22:45.339 "trsvcid": "4420" 00:22:45.339 }, 00:22:45.339 "peer_address": { 00:22:45.339 "trtype": "TCP", 00:22:45.339 "adrfam": "IPv4", 00:22:45.339 "traddr": "10.0.0.1", 00:22:45.339 "trsvcid": "50908" 00:22:45.339 }, 00:22:45.339 "auth": { 00:22:45.339 "state": "completed", 00:22:45.339 "digest": "sha384", 00:22:45.339 "dhgroup": "ffdhe6144" 00:22:45.339 } 00:22:45.339 } 00:22:45.339 ]' 00:22:45.339 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.597 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.597 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.597 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:45.597 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.597 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:45.597 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.597 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.856 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:45.856 12:37:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
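Each `connect_authenticate` pass above verifies the negotiated parameters by piping the `nvmf_subsystem_get_qpairs` RPC output through `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). The same checks, sketched in Python against a qpair document with the fields shown in the log (values taken from the sha384/ffdhe6144 pass):

```python
import json

# Qpair document as returned by the nvmf_subsystem_get_qpairs RPC
# (fields copied from the log output above, trimmed to the checked ones).
qpairs = json.loads("""
[
  {
    "cntlid": 85,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "ffdhe6144"
    }
  }
]
""")

# Mirror of the script's jq assertions: authentication must have
# completed with the digest and DH group configured for this pass.
auth = qpairs[0]["auth"]
assert auth["state"] == "completed"
assert auth["digest"] == "sha384"
assert auth["dhgroup"] == "ffdhe6144"
print("qpair authenticated with", auth["digest"], auth["dhgroup"])
```

A failed negotiation would surface here as an `auth.state` other than `completed`, which is exactly what the `[[ completed == ... ]]` comparisons in the trace are guarding against.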
00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:46.790 12:37:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.048 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.614 00:22:47.614 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.614 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.614 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.872 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.872 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.872 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.872 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.872 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.872 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.872 { 00:22:47.872 "cntlid": 87, 00:22:47.872 "qid": 0, 00:22:47.872 "state": "enabled", 00:22:47.872 "thread": "nvmf_tgt_poll_group_000", 00:22:47.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:47.872 "listen_address": { 00:22:47.872 "trtype": 
"TCP", 00:22:47.872 "adrfam": "IPv4", 00:22:47.872 "traddr": "10.0.0.2", 00:22:47.872 "trsvcid": "4420" 00:22:47.872 }, 00:22:47.872 "peer_address": { 00:22:47.872 "trtype": "TCP", 00:22:47.872 "adrfam": "IPv4", 00:22:47.872 "traddr": "10.0.0.1", 00:22:47.872 "trsvcid": "34284" 00:22:47.872 }, 00:22:47.872 "auth": { 00:22:47.872 "state": "completed", 00:22:47.872 "digest": "sha384", 00:22:47.872 "dhgroup": "ffdhe6144" 00:22:47.872 } 00:22:47.872 } 00:22:47.872 ]' 00:22:47.872 12:37:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.872 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:47.872 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.872 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:47.872 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.872 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.872 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.872 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.437 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:48.438 12:37:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:49.003 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:49.261 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.519 12:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.519 12:37:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.452 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.452 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.452 { 00:22:50.452 "cntlid": 89, 00:22:50.452 "qid": 0, 00:22:50.452 "state": "enabled", 00:22:50.452 "thread": "nvmf_tgt_poll_group_000", 00:22:50.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:50.452 "listen_address": { 00:22:50.452 "trtype": "TCP", 00:22:50.452 "adrfam": "IPv4", 00:22:50.452 "traddr": "10.0.0.2", 00:22:50.452 "trsvcid": "4420" 00:22:50.452 }, 00:22:50.452 "peer_address": { 00:22:50.452 "trtype": "TCP", 00:22:50.452 "adrfam": "IPv4", 00:22:50.452 "traddr": "10.0.0.1", 00:22:50.452 "trsvcid": "34314" 00:22:50.452 }, 00:22:50.452 "auth": { 00:22:50.452 "state": "completed", 00:22:50.452 "digest": "sha384", 00:22:50.452 "dhgroup": "ffdhe8192" 00:22:50.452 } 00:22:50.452 } 00:22:50.453 ]' 00:22:50.453 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.710 12:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:50.710 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.710 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:50.710 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.710 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.710 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.710 12:37:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.968 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:50.968 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:22:51.900 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:22:51.901 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.901 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.901 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.901 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.901 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.901 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:51.901 12:37:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:52.159 12:37:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.092 00:22:53.092 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.092 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.092 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.350 { 00:22:53.350 "cntlid": 91, 00:22:53.350 "qid": 0, 00:22:53.350 "state": "enabled", 00:22:53.350 "thread": "nvmf_tgt_poll_group_000", 00:22:53.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:53.350 "listen_address": { 00:22:53.350 "trtype": "TCP", 00:22:53.350 "adrfam": "IPv4", 00:22:53.350 "traddr": "10.0.0.2", 00:22:53.350 "trsvcid": "4420" 00:22:53.350 }, 00:22:53.350 "peer_address": { 00:22:53.350 "trtype": "TCP", 00:22:53.350 "adrfam": "IPv4", 00:22:53.350 "traddr": "10.0.0.1", 00:22:53.350 "trsvcid": "34342" 00:22:53.350 }, 00:22:53.350 "auth": { 00:22:53.350 "state": "completed", 00:22:53.350 "digest": "sha384", 00:22:53.350 "dhgroup": "ffdhe8192" 00:22:53.350 } 00:22:53.350 } 00:22:53.350 ]' 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.350 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.915 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:53.915 12:37:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
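The `for dhgroup in "${dhgroups[@]}"` and `for keyid in "${!keys[@]}"` loops visible in this trace walk a digest × DH group × key matrix, running one connect/authenticate/disconnect cycle per combination. A sketch of the matrix covered by the portion of the log shown here (the digest and group lists are inferred from the log output, not taken from auth.sh itself):

```python
from itertools import product

# Combinations exercised in this slice of the log: sha384 with the
# ffdhe6144 and ffdhe8192 groups, over key indices 0-3.
digests = ["sha384"]
dhgroups = ["ffdhe6144", "ffdhe8192"]
keyids = [0, 1, 2, 3]

passes = [(d, g, k) for d, g, k in product(digests, dhgroups, keyids)]
print(len(passes))  # prints: 8
```

Each tuple corresponds to one `bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...` call followed by a `connect_authenticate <digest> <dhgroup> <keyid>` invocation, which is why the same add_host/attach/get_qpairs/detach sequence repeats throughout the log.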
00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.848 12:37:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.105 12:37:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.038 00:22:56.038 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.038 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.038 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.295 { 00:22:56.295 "cntlid": 93, 00:22:56.295 "qid": 0, 00:22:56.295 "state": "enabled", 00:22:56.295 "thread": "nvmf_tgt_poll_group_000", 00:22:56.295 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:56.295 "listen_address": { 00:22:56.295 "trtype": "TCP", 00:22:56.295 "adrfam": "IPv4", 00:22:56.295 "traddr": "10.0.0.2", 00:22:56.295 "trsvcid": "4420" 00:22:56.295 }, 00:22:56.295 "peer_address": { 00:22:56.295 "trtype": "TCP", 00:22:56.295 "adrfam": "IPv4", 00:22:56.295 "traddr": "10.0.0.1", 00:22:56.295 "trsvcid": "34364" 00:22:56.295 }, 00:22:56.295 "auth": { 00:22:56.295 "state": "completed", 00:22:56.295 "digest": "sha384", 00:22:56.295 "dhgroup": "ffdhe8192" 00:22:56.295 } 00:22:56.295 } 00:22:56.295 ]' 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:56.295 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.296 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.296 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.296 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.553 12:37:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:56.553 12:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:57.486 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:57.744 12:37:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:58.678 00:22:58.678 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:58.678 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.678 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.976 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.976 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.976 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.976 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.976 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.976 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.976 { 00:22:58.976 "cntlid": 95, 00:22:58.976 "qid": 0, 00:22:58.976 "state": "enabled", 00:22:58.976 "thread": "nvmf_tgt_poll_group_000", 00:22:58.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:58.976 "listen_address": { 00:22:58.976 "trtype": "TCP", 00:22:58.976 "adrfam": "IPv4", 00:22:58.976 "traddr": "10.0.0.2", 00:22:58.976 "trsvcid": "4420" 00:22:58.976 }, 00:22:58.976 "peer_address": { 00:22:58.976 "trtype": "TCP", 00:22:58.976 "adrfam": "IPv4", 00:22:58.976 "traddr": "10.0.0.1", 00:22:58.976 "trsvcid": "56146" 00:22:58.976 }, 00:22:58.976 "auth": { 00:22:58.976 "state": "completed", 00:22:58.976 "digest": "sha384", 00:22:58.976 "dhgroup": "ffdhe8192" 00:22:58.976 } 00:22:58.976 } 00:22:58.976 ]' 00:22:58.976 12:37:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.976 12:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:58.976 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.976 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:58.976 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.976 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.976 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.976 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.260 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:22:59.260 12:37:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:00.190 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.190 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:00.191 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.448 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.705 00:23:00.705 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.705 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.705 12:37:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.963 12:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.963 { 00:23:00.963 "cntlid": 97, 00:23:00.963 "qid": 0, 00:23:00.963 "state": "enabled", 00:23:00.963 "thread": "nvmf_tgt_poll_group_000", 00:23:00.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:00.963 "listen_address": { 00:23:00.963 "trtype": "TCP", 00:23:00.963 "adrfam": "IPv4", 00:23:00.963 "traddr": "10.0.0.2", 00:23:00.963 "trsvcid": "4420" 00:23:00.963 }, 00:23:00.963 "peer_address": { 00:23:00.963 "trtype": "TCP", 00:23:00.963 "adrfam": "IPv4", 00:23:00.963 "traddr": "10.0.0.1", 00:23:00.963 "trsvcid": "56168" 00:23:00.963 }, 00:23:00.963 "auth": { 00:23:00.963 "state": "completed", 00:23:00.963 "digest": "sha512", 00:23:00.963 "dhgroup": "null" 00:23:00.963 } 00:23:00.963 } 00:23:00.963 ]' 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:00.963 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.221 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.221 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.221 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.479 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:01.479 12:37:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:02.411 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.669 12:37:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.927 00:23:02.927 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.927 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.927 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.184 { 00:23:03.184 "cntlid": 99, 
00:23:03.184 "qid": 0, 00:23:03.184 "state": "enabled", 00:23:03.184 "thread": "nvmf_tgt_poll_group_000", 00:23:03.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:03.184 "listen_address": { 00:23:03.184 "trtype": "TCP", 00:23:03.184 "adrfam": "IPv4", 00:23:03.184 "traddr": "10.0.0.2", 00:23:03.184 "trsvcid": "4420" 00:23:03.184 }, 00:23:03.184 "peer_address": { 00:23:03.184 "trtype": "TCP", 00:23:03.184 "adrfam": "IPv4", 00:23:03.184 "traddr": "10.0.0.1", 00:23:03.184 "trsvcid": "56206" 00:23:03.184 }, 00:23:03.184 "auth": { 00:23:03.184 "state": "completed", 00:23:03.184 "digest": "sha512", 00:23:03.184 "dhgroup": "null" 00:23:03.184 } 00:23:03.184 } 00:23:03.184 ]' 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:03.184 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.442 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.442 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.442 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.699 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret 
DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:03.699 12:37:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.631 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.889 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.889 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.889 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.889 12:37:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.147 00:23:05.147 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.147 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.147 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.404 { 00:23:05.404 "cntlid": 101, 00:23:05.404 "qid": 0, 00:23:05.404 "state": "enabled", 00:23:05.404 "thread": "nvmf_tgt_poll_group_000", 00:23:05.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:05.404 "listen_address": { 00:23:05.404 "trtype": "TCP", 00:23:05.404 "adrfam": "IPv4", 00:23:05.404 "traddr": "10.0.0.2", 00:23:05.404 "trsvcid": "4420" 00:23:05.404 }, 00:23:05.404 "peer_address": { 00:23:05.404 "trtype": "TCP", 00:23:05.404 "adrfam": "IPv4", 00:23:05.404 "traddr": "10.0.0.1", 00:23:05.404 "trsvcid": "56248" 00:23:05.404 }, 00:23:05.404 "auth": { 00:23:05.404 "state": "completed", 00:23:05.404 "digest": "sha512", 00:23:05.404 "dhgroup": "null" 00:23:05.404 } 00:23:05.404 } 
00:23:05.404 ]' 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.404 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.662 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:05.662 12:37:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.595 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:06.595 12:37:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.853 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.110 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.110 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:07.110 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.110 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.368 00:23:07.368 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.368 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.368 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.625 { 00:23:07.625 "cntlid": 103, 00:23:07.625 "qid": 0, 00:23:07.625 "state": "enabled", 00:23:07.625 "thread": "nvmf_tgt_poll_group_000", 00:23:07.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:07.625 "listen_address": { 00:23:07.625 "trtype": "TCP", 00:23:07.625 "adrfam": "IPv4", 00:23:07.625 "traddr": "10.0.0.2", 00:23:07.625 "trsvcid": "4420" 00:23:07.625 }, 00:23:07.625 "peer_address": { 00:23:07.625 "trtype": "TCP", 00:23:07.625 "adrfam": "IPv4", 00:23:07.625 "traddr": "10.0.0.1", 00:23:07.625 "trsvcid": "53098" 00:23:07.625 }, 00:23:07.625 "auth": { 00:23:07.625 "state": "completed", 00:23:07.625 "digest": "sha512", 00:23:07.625 "dhgroup": "null" 00:23:07.625 } 00:23:07.625 } 00:23:07.625 ]' 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.625 12:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.625 12:37:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.190 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:08.190 12:37:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.122 12:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.122 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.688 00:23:09.688 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.688 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.688 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.946 { 00:23:09.946 "cntlid": 105, 00:23:09.946 "qid": 0, 00:23:09.946 "state": "enabled", 00:23:09.946 "thread": "nvmf_tgt_poll_group_000", 00:23:09.946 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:09.946 "listen_address": { 00:23:09.946 "trtype": "TCP", 00:23:09.946 "adrfam": "IPv4", 00:23:09.946 "traddr": "10.0.0.2", 00:23:09.946 "trsvcid": "4420" 00:23:09.946 }, 00:23:09.946 "peer_address": { 00:23:09.946 "trtype": "TCP", 00:23:09.946 "adrfam": "IPv4", 00:23:09.946 "traddr": "10.0.0.1", 00:23:09.946 "trsvcid": "53130" 00:23:09.946 }, 00:23:09.946 "auth": { 00:23:09.946 "state": "completed", 00:23:09.946 "digest": "sha512", 00:23:09.946 "dhgroup": "ffdhe2048" 00:23:09.946 } 00:23:09.946 } 00:23:09.946 ]' 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.946 12:37:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.946 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:09.946 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.946 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.946 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.946 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.204 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret 
DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:10.204 12:37:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.135 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:11.393 12:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.393 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.651 00:23:11.651 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.651 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.651 12:37:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.909 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.909 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.909 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.909 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.909 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.909 { 00:23:11.909 "cntlid": 107, 00:23:11.909 "qid": 0, 00:23:11.909 "state": "enabled", 00:23:11.909 "thread": "nvmf_tgt_poll_group_000", 00:23:11.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:11.909 "listen_address": { 00:23:11.909 "trtype": "TCP", 00:23:11.909 "adrfam": "IPv4", 00:23:11.909 "traddr": "10.0.0.2", 00:23:11.909 "trsvcid": "4420" 00:23:11.909 }, 00:23:11.909 "peer_address": { 00:23:11.909 "trtype": "TCP", 00:23:11.909 "adrfam": "IPv4", 00:23:11.909 "traddr": "10.0.0.1", 00:23:11.909 "trsvcid": "53158" 00:23:11.909 }, 00:23:11.909 "auth": { 00:23:11.909 "state": 
"completed", 00:23:11.909 "digest": "sha512", 00:23:11.909 "dhgroup": "ffdhe2048" 00:23:11.909 } 00:23:11.909 } 00:23:11.909 ]' 00:23:11.909 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.167 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.167 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.167 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:12.167 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.167 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.167 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.167 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.425 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:12.425 12:37:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:13.357 12:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.357 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.358 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.358 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.358 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.358 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.358 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.358 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.615 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.873 00:23:13.873 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.873 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.873 12:37:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.131 
12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.131 { 00:23:14.131 "cntlid": 109, 00:23:14.131 "qid": 0, 00:23:14.131 "state": "enabled", 00:23:14.131 "thread": "nvmf_tgt_poll_group_000", 00:23:14.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:14.131 "listen_address": { 00:23:14.131 "trtype": "TCP", 00:23:14.131 "adrfam": "IPv4", 00:23:14.131 "traddr": "10.0.0.2", 00:23:14.131 "trsvcid": "4420" 00:23:14.131 }, 00:23:14.131 "peer_address": { 00:23:14.131 "trtype": "TCP", 00:23:14.131 "adrfam": "IPv4", 00:23:14.131 "traddr": "10.0.0.1", 00:23:14.131 "trsvcid": "53186" 00:23:14.131 }, 00:23:14.131 "auth": { 00:23:14.131 "state": "completed", 00:23:14.131 "digest": "sha512", 00:23:14.131 "dhgroup": "ffdhe2048" 00:23:14.131 } 00:23:14.131 } 00:23:14.131 ]' 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.131 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:14.131 12:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.389 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.389 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.389 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.647 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:14.647 12:37:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.580 
12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.580 12:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.580 12:37:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:16.145 00:23:16.145 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.145 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.145 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.402 { 00:23:16.402 "cntlid": 111, 
00:23:16.402 "qid": 0, 00:23:16.402 "state": "enabled", 00:23:16.402 "thread": "nvmf_tgt_poll_group_000", 00:23:16.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:16.402 "listen_address": { 00:23:16.402 "trtype": "TCP", 00:23:16.402 "adrfam": "IPv4", 00:23:16.402 "traddr": "10.0.0.2", 00:23:16.402 "trsvcid": "4420" 00:23:16.402 }, 00:23:16.402 "peer_address": { 00:23:16.402 "trtype": "TCP", 00:23:16.402 "adrfam": "IPv4", 00:23:16.402 "traddr": "10.0.0.1", 00:23:16.402 "trsvcid": "53210" 00:23:16.402 }, 00:23:16.402 "auth": { 00:23:16.402 "state": "completed", 00:23:16.402 "digest": "sha512", 00:23:16.402 "dhgroup": "ffdhe2048" 00:23:16.402 } 00:23:16.402 } 00:23:16.402 ]' 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.402 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.660 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:16.660 12:37:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:17.593 12:37:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:17.851 12:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.851 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.416 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.417 { 00:23:18.417 "cntlid": 113, 00:23:18.417 "qid": 0, 00:23:18.417 "state": "enabled", 00:23:18.417 "thread": "nvmf_tgt_poll_group_000", 00:23:18.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:18.417 "listen_address": { 00:23:18.417 "trtype": "TCP", 00:23:18.417 "adrfam": "IPv4", 00:23:18.417 "traddr": "10.0.0.2", 00:23:18.417 "trsvcid": "4420" 00:23:18.417 }, 00:23:18.417 "peer_address": { 00:23:18.417 "trtype": "TCP", 00:23:18.417 "adrfam": "IPv4", 00:23:18.417 "traddr": "10.0.0.1", 00:23:18.417 "trsvcid": "56906" 00:23:18.417 }, 00:23:18.417 "auth": { 00:23:18.417 "state": 
"completed", 00:23:18.417 "digest": "sha512", 00:23:18.417 "dhgroup": "ffdhe3072" 00:23:18.417 } 00:23:18.417 } 00:23:18.417 ]' 00:23:18.417 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.674 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.674 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.674 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:18.674 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.674 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.674 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.674 12:37:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.932 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:18.932 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret 
DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:19.866 12:37:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.124 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.382 00:23:20.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.382 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.639 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.639 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.639 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.639 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.639 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.639 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.639 { 00:23:20.639 "cntlid": 115, 00:23:20.639 "qid": 0, 00:23:20.639 "state": "enabled", 00:23:20.639 "thread": "nvmf_tgt_poll_group_000", 00:23:20.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:20.639 "listen_address": { 00:23:20.639 "trtype": "TCP", 00:23:20.639 "adrfam": "IPv4", 00:23:20.639 "traddr": "10.0.0.2", 00:23:20.639 "trsvcid": "4420" 00:23:20.639 }, 00:23:20.639 "peer_address": { 00:23:20.639 "trtype": "TCP", 00:23:20.639 "adrfam": "IPv4", 00:23:20.639 "traddr": "10.0.0.1", 00:23:20.639 "trsvcid": "56918" 00:23:20.639 }, 00:23:20.639 "auth": { 00:23:20.639 "state": "completed", 00:23:20.639 "digest": "sha512", 00:23:20.639 "dhgroup": "ffdhe3072" 00:23:20.639 } 00:23:20.639 } 00:23:20.639 ]' 00:23:20.639 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.897 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.897 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.897 12:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:20.897 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.897 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.897 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.897 12:37:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.154 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:21.154 12:37:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.086 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.343 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.601 00:23:22.601 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.601 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.601 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.858 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.858 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.858 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.858 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.858 12:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.858 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.858 { 00:23:22.858 "cntlid": 117, 00:23:22.858 "qid": 0, 00:23:22.858 "state": "enabled", 00:23:22.858 "thread": "nvmf_tgt_poll_group_000", 00:23:22.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:22.858 "listen_address": { 00:23:22.858 "trtype": "TCP", 00:23:22.858 "adrfam": "IPv4", 00:23:22.858 "traddr": "10.0.0.2", 00:23:22.858 "trsvcid": "4420" 00:23:22.858 }, 00:23:22.858 "peer_address": { 00:23:22.858 "trtype": "TCP", 00:23:22.858 "adrfam": "IPv4", 00:23:22.858 "traddr": "10.0.0.1", 00:23:22.858 "trsvcid": "56950" 00:23:22.858 }, 00:23:22.858 "auth": { 00:23:22.858 "state": "completed", 00:23:22.858 "digest": "sha512", 00:23:22.858 "dhgroup": "ffdhe3072" 00:23:22.858 } 00:23:22.858 } 00:23:22.858 ]' 00:23:22.858 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:23.116 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.116 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:23.116 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:23.116 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:23.116 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.116 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.116 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.374 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:23.374 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:24.306 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:24.564 12:37:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:24.821 00:23:25.079 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.079 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.079 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.337 { 00:23:25.337 "cntlid": 119, 00:23:25.337 "qid": 0, 00:23:25.337 "state": "enabled", 00:23:25.337 "thread": "nvmf_tgt_poll_group_000", 00:23:25.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:25.337 "listen_address": { 00:23:25.337 "trtype": "TCP", 00:23:25.337 "adrfam": "IPv4", 00:23:25.337 "traddr": "10.0.0.2", 00:23:25.337 "trsvcid": "4420" 00:23:25.337 }, 00:23:25.337 "peer_address": { 00:23:25.337 "trtype": "TCP", 00:23:25.337 "adrfam": "IPv4", 00:23:25.337 "traddr": "10.0.0.1", 
00:23:25.337 "trsvcid": "56990" 00:23:25.337 }, 00:23:25.337 "auth": { 00:23:25.337 "state": "completed", 00:23:25.337 "digest": "sha512", 00:23:25.337 "dhgroup": "ffdhe3072" 00:23:25.337 } 00:23:25.337 } 00:23:25.337 ]' 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.337 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.595 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:25.595 12:37:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.528 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:26.786 12:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.786 12:37:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.351 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.351 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.351 { 00:23:27.351 "cntlid": 121, 00:23:27.351 "qid": 0, 00:23:27.351 "state": "enabled", 00:23:27.351 "thread": "nvmf_tgt_poll_group_000", 00:23:27.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:27.351 "listen_address": { 00:23:27.351 "trtype": "TCP", 00:23:27.351 "adrfam": "IPv4", 00:23:27.351 "traddr": "10.0.0.2", 00:23:27.351 "trsvcid": "4420" 00:23:27.351 }, 00:23:27.351 "peer_address": { 00:23:27.351 "trtype": "TCP", 00:23:27.351 "adrfam": "IPv4", 00:23:27.351 "traddr": "10.0.0.1", 00:23:27.351 "trsvcid": "41460" 00:23:27.351 }, 00:23:27.351 "auth": { 00:23:27.351 "state": "completed", 00:23:27.351 "digest": "sha512", 00:23:27.351 "dhgroup": "ffdhe4096" 00:23:27.351 } 00:23:27.351 } 00:23:27.352 ]' 00:23:27.609 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:27.609 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:27.609 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:27.609 12:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:27.609 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:27.609 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.609 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.609 12:37:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.867 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:27.867 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:28.848 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.848 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:28.848 12:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.848 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.848 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.848 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:28.848 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.848 12:37:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.106 12:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.106 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.364 00:23:29.364 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.364 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.364 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.930 { 00:23:29.930 "cntlid": 123, 00:23:29.930 "qid": 0, 00:23:29.930 "state": "enabled", 00:23:29.930 "thread": "nvmf_tgt_poll_group_000", 00:23:29.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:29.930 "listen_address": { 00:23:29.930 "trtype": "TCP", 00:23:29.930 "adrfam": "IPv4", 00:23:29.930 "traddr": "10.0.0.2", 00:23:29.930 "trsvcid": "4420" 00:23:29.930 }, 00:23:29.930 "peer_address": { 00:23:29.930 "trtype": "TCP", 00:23:29.930 "adrfam": "IPv4", 00:23:29.930 "traddr": "10.0.0.1", 00:23:29.930 "trsvcid": "41482" 00:23:29.930 }, 00:23:29.930 "auth": { 00:23:29.930 "state": "completed", 00:23:29.930 "digest": "sha512", 00:23:29.930 "dhgroup": "ffdhe4096" 00:23:29.930 } 00:23:29.930 } 00:23:29.930 ]' 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.930 12:37:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.188 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:30.188 12:37:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:31.121 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.121 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:31.121 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.121 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.121 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.121 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:31.122 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.122 12:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.381 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.639 00:23:31.896 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.896 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.896 12:38:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.155 { 00:23:32.155 "cntlid": 125, 00:23:32.155 "qid": 0, 00:23:32.155 "state": "enabled", 00:23:32.155 "thread": "nvmf_tgt_poll_group_000", 00:23:32.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:32.155 "listen_address": { 00:23:32.155 "trtype": "TCP", 00:23:32.155 "adrfam": "IPv4", 00:23:32.155 "traddr": "10.0.0.2", 00:23:32.155 
"trsvcid": "4420" 00:23:32.155 }, 00:23:32.155 "peer_address": { 00:23:32.155 "trtype": "TCP", 00:23:32.155 "adrfam": "IPv4", 00:23:32.155 "traddr": "10.0.0.1", 00:23:32.155 "trsvcid": "41514" 00:23:32.155 }, 00:23:32.155 "auth": { 00:23:32.155 "state": "completed", 00:23:32.155 "digest": "sha512", 00:23:32.155 "dhgroup": "ffdhe4096" 00:23:32.155 } 00:23:32.155 } 00:23:32.155 ]' 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.155 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.412 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:32.413 12:38:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.344 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:33.602 12:38:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:34.167 00:23:34.167 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.167 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.167 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.425 { 00:23:34.425 "cntlid": 127, 00:23:34.425 "qid": 0, 00:23:34.425 "state": "enabled", 00:23:34.425 "thread": "nvmf_tgt_poll_group_000", 00:23:34.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:34.425 "listen_address": { 00:23:34.425 "trtype": "TCP", 00:23:34.425 "adrfam": "IPv4", 00:23:34.425 "traddr": "10.0.0.2", 00:23:34.425 "trsvcid": "4420" 00:23:34.425 }, 00:23:34.425 "peer_address": { 00:23:34.425 "trtype": "TCP", 00:23:34.425 "adrfam": "IPv4", 00:23:34.425 "traddr": "10.0.0.1", 00:23:34.425 "trsvcid": "41540" 00:23:34.425 }, 00:23:34.425 "auth": { 00:23:34.425 "state": "completed", 00:23:34.425 "digest": "sha512", 00:23:34.425 "dhgroup": "ffdhe4096" 00:23:34.425 } 00:23:34.425 } 00:23:34.425 ]' 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.425 12:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.425 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.683 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:34.684 12:38:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:35.617 12:38:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:35.875 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:36.440
00:23:36.440 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:36.440 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:36.440 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:36.698 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:36.698 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:36.698 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.698 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:36.698 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.698 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:36.698 {
00:23:36.698 "cntlid": 129,
00:23:36.698 "qid": 0,
00:23:36.698 "state": "enabled",
00:23:36.698 "thread": "nvmf_tgt_poll_group_000",
00:23:36.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:36.698 "listen_address": {
00:23:36.698 "trtype": "TCP",
00:23:36.698 "adrfam": "IPv4",
00:23:36.698 "traddr": "10.0.0.2",
00:23:36.698 "trsvcid": "4420"
00:23:36.698 },
00:23:36.698 "peer_address": {
00:23:36.698 "trtype": "TCP",
00:23:36.698 "adrfam": "IPv4",
00:23:36.698 "traddr": "10.0.0.1",
00:23:36.698 "trsvcid": "41564"
00:23:36.698 },
00:23:36.698 "auth": {
00:23:36.698 "state": "completed",
00:23:36.698 "digest": "sha512",
00:23:36.698 "dhgroup": "ffdhe6144"
00:23:36.698 }
00:23:36.698 }
00:23:36.698 ]'
00:23:36.698 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:36.956 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:36.956 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:36.957 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:23:36.957 12:38:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:36.957 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:36.957 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:36.957 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:37.215 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=:
00:23:37.215 12:38:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=:
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:38.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:38.149 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:38.407 12:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:38.973
00:23:38.973 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:38.973 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:38.973 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:39.230 {
00:23:39.230 "cntlid": 131,
00:23:39.230 "qid": 0,
00:23:39.230 "state": "enabled",
00:23:39.230 "thread": "nvmf_tgt_poll_group_000",
00:23:39.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:39.230 "listen_address": {
00:23:39.230 "trtype": "TCP",
00:23:39.230 "adrfam": "IPv4",
00:23:39.230 "traddr": "10.0.0.2",
00:23:39.230 "trsvcid": "4420"
00:23:39.230 },
00:23:39.230 "peer_address": {
00:23:39.230 "trtype": "TCP",
00:23:39.230 "adrfam": "IPv4",
00:23:39.230 "traddr": "10.0.0.1",
00:23:39.230 "trsvcid": "47870"
00:23:39.230 },
00:23:39.230 "auth": {
00:23:39.230 "state": "completed",
00:23:39.230 "digest": "sha512",
00:23:39.230 "dhgroup": "ffdhe6144"
00:23:39.230 }
00:23:39.230 }
00:23:39.230 ]'
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:39.230 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:39.488 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==:
00:23:39.488 12:38:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==:
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:40.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:40.421 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:40.679 12:38:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:41.244
00:23:41.244 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:41.244 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:41.244 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:41.502 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:41.502 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:41.502 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.502 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:41.502 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.502 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:41.502 {
00:23:41.502 "cntlid": 133,
00:23:41.502 "qid": 0,
00:23:41.502 "state": "enabled",
00:23:41.502 "thread": "nvmf_tgt_poll_group_000",
00:23:41.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:41.502 "listen_address": {
00:23:41.502 "trtype": "TCP",
00:23:41.502 "adrfam": "IPv4",
00:23:41.502 "traddr": "10.0.0.2",
00:23:41.502 "trsvcid": "4420"
00:23:41.502 },
00:23:41.502 "peer_address": {
00:23:41.502 "trtype": "TCP",
00:23:41.502 "adrfam": "IPv4",
00:23:41.502 "traddr": "10.0.0.1",
00:23:41.502 "trsvcid": "47886"
00:23:41.502 },
00:23:41.502 "auth": {
00:23:41.502 "state": "completed",
00:23:41.502 "digest": "sha512",
00:23:41.502 "dhgroup": "ffdhe6144"
00:23:41.502 }
00:23:41.502 }
00:23:41.502 ]'
00:23:41.502 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:41.760 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:41.760 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:41.760 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:23:41.760 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:41.760 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:41.760 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:41.760 12:38:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:42.017 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK:
00:23:42.017 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK:
00:23:42.951 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:42.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:42.951 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:42.951 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:42.951 12:38:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:42.951 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:42.951 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:42.951 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:42.951 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:43.209 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:43.774
00:23:43.774 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:43.774 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:43.774 12:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:44.032 {
00:23:44.032 "cntlid": 135,
00:23:44.032 "qid": 0,
00:23:44.032 "state": "enabled",
00:23:44.032 "thread": "nvmf_tgt_poll_group_000",
00:23:44.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:44.032 "listen_address": {
00:23:44.032 "trtype": "TCP",
00:23:44.032 "adrfam": "IPv4",
00:23:44.032 "traddr": "10.0.0.2",
00:23:44.032 "trsvcid": "4420"
00:23:44.032 },
00:23:44.032 "peer_address": {
00:23:44.032 "trtype": "TCP",
00:23:44.032 "adrfam": "IPv4",
00:23:44.032 "traddr": "10.0.0.1",
00:23:44.032 "trsvcid": "47912"
00:23:44.032 },
00:23:44.032 "auth": {
00:23:44.032 "state": "completed",
00:23:44.032 "digest": "sha512",
00:23:44.032 "dhgroup": "ffdhe6144"
00:23:44.032 }
00:23:44.032 }
00:23:44.032 ]'
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:44.032 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:44.290 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:23:44.290 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:44.290 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:44.290 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:44.290 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:44.548 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=:
00:23:44.548 12:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=:
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:45.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:45.481 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:45.482 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:45.740 12:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:46.673
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:46.673 {
00:23:46.673 "cntlid": 137,
00:23:46.673 "qid": 0,
00:23:46.673 "state": "enabled",
00:23:46.673 "thread": "nvmf_tgt_poll_group_000",
00:23:46.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:46.673 "listen_address": {
00:23:46.673 "trtype": "TCP",
00:23:46.673 "adrfam": "IPv4",
00:23:46.673 "traddr": "10.0.0.2",
00:23:46.673 "trsvcid": "4420"
00:23:46.673 },
00:23:46.673 "peer_address": {
00:23:46.673 "trtype": "TCP",
00:23:46.673 "adrfam": "IPv4",
00:23:46.673 "traddr": "10.0.0.1",
00:23:46.673 "trsvcid": "47944"
00:23:46.673 },
00:23:46.673 "auth": {
00:23:46.673 "state": "completed",
00:23:46.673 "digest": "sha512",
00:23:46.673 "dhgroup": "ffdhe8192"
00:23:46.673 }
00:23:46.673 }
00:23:46.673 ]'
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:46.673 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:46.931 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:23:46.931 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:46.931 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:46.931 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:46.931 12:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:47.188 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=:
00:23:47.188 12:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=:
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:48.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:48.122 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:48.380 12:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:49.313
00:23:49.313 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:49.313 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:49.313 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:49.570 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:49.570 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:49.570 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:49.570 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:49.570 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:49.570 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:49.570 {
00:23:49.570 "cntlid": 139,
00:23:49.570 "qid": 0,
00:23:49.570 "state": "enabled",
00:23:49.571 "thread": "nvmf_tgt_poll_group_000",
00:23:49.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:49.571 "listen_address": {
00:23:49.571 "trtype": "TCP",
00:23:49.571 "adrfam": "IPv4",
00:23:49.571 "traddr": "10.0.0.2",
00:23:49.571 "trsvcid": "4420"
00:23:49.571 },
00:23:49.571 "peer_address": {
00:23:49.571 "trtype": "TCP",
00:23:49.571 "adrfam": "IPv4",
00:23:49.571 "traddr": "10.0.0.1",
00:23:49.571 "trsvcid": "37480"
00:23:49.571 },
00:23:49.571 "auth": {
00:23:49.571 "state": "completed",
00:23:49.571 "digest": "sha512",
00:23:49.571 "dhgroup": "ffdhe8192"
00:23:49.571 }
00:23:49.571 }
00:23:49.571 ]'
00:23:49.571 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:49.571 12:38:18
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:49.571 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.571 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:49.571 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.571 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.571 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.571 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.827 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:49.827 12:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: --dhchap-ctrl-secret DHHC-1:02:Mzk4NzU4NjdmMjgxZGZhYzIxMTkxYWFmMDI2ODZjZjdkMTdmZWZlNTRmMTVmYjEypl/P6Q==: 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:50.760 12:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.018 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.953 00:23:51.953 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:51.953 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:51.953 12:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.211 12:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:52.211 { 00:23:52.211 "cntlid": 141, 00:23:52.211 "qid": 0, 00:23:52.211 "state": "enabled", 00:23:52.211 "thread": "nvmf_tgt_poll_group_000", 00:23:52.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:52.211 "listen_address": { 00:23:52.211 "trtype": "TCP", 00:23:52.211 "adrfam": "IPv4", 00:23:52.211 "traddr": "10.0.0.2", 00:23:52.211 "trsvcid": "4420" 00:23:52.211 }, 00:23:52.211 "peer_address": { 00:23:52.211 "trtype": "TCP", 00:23:52.211 "adrfam": "IPv4", 00:23:52.211 "traddr": "10.0.0.1", 00:23:52.211 "trsvcid": "37502" 00:23:52.211 }, 00:23:52.211 "auth": { 00:23:52.211 "state": "completed", 00:23:52.211 "digest": "sha512", 00:23:52.211 "dhgroup": "ffdhe8192" 00:23:52.211 } 00:23:52.211 } 00:23:52.211 ]' 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.211 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.468 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:52.469 12:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:01:ZDQ3MWMzOWFiMjZhYzQ3MGM5MWU0MzEyYWI3Njg4Nzf9bGzK: 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.403 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:53.661 12:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:54.595 00:23:54.595 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:54.595 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:54.595 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.853 { 00:23:54.853 "cntlid": 143, 00:23:54.853 "qid": 0, 00:23:54.853 "state": "enabled", 00:23:54.853 "thread": "nvmf_tgt_poll_group_000", 00:23:54.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:54.853 "listen_address": { 00:23:54.853 "trtype": "TCP", 00:23:54.853 "adrfam": 
"IPv4", 00:23:54.853 "traddr": "10.0.0.2", 00:23:54.853 "trsvcid": "4420" 00:23:54.853 }, 00:23:54.853 "peer_address": { 00:23:54.853 "trtype": "TCP", 00:23:54.853 "adrfam": "IPv4", 00:23:54.853 "traddr": "10.0.0.1", 00:23:54.853 "trsvcid": "37532" 00:23:54.853 }, 00:23:54.853 "auth": { 00:23:54.853 "state": "completed", 00:23:54.853 "digest": "sha512", 00:23:54.853 "dhgroup": "ffdhe8192" 00:23:54.853 } 00:23:54.853 } 00:23:54.853 ]' 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:54.853 12:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.853 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.853 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.853 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.111 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:55.111 12:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:56.051 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:56.309 12:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.309 12:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.247 00:23:57.247 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:57.247 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:57.247 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:57.505 { 00:23:57.505 "cntlid": 145, 00:23:57.505 "qid": 0, 00:23:57.505 "state": "enabled", 00:23:57.505 "thread": "nvmf_tgt_poll_group_000", 00:23:57.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:57.505 "listen_address": { 00:23:57.505 "trtype": "TCP", 00:23:57.505 "adrfam": "IPv4", 00:23:57.505 "traddr": "10.0.0.2", 00:23:57.505 "trsvcid": "4420" 00:23:57.505 }, 00:23:57.505 "peer_address": { 00:23:57.505 "trtype": "TCP", 00:23:57.505 "adrfam": "IPv4", 00:23:57.505 "traddr": "10.0.0.1", 00:23:57.505 "trsvcid": "37552" 00:23:57.505 }, 00:23:57.505 "auth": { 00:23:57.505 "state": 
"completed", 00:23:57.505 "digest": "sha512", 00:23:57.505 "dhgroup": "ffdhe8192" 00:23:57.505 } 00:23:57.505 } 00:23:57.505 ]' 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:57.505 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.763 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:57.763 12:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZTQ3NmMxNmZjNDY4ZWVkMDQwNWEwN2UyNjVlMTNjZGE2MjcwMGE0YmFmYjJkMjYxoyfvkA==: --dhchap-ctrl-secret 
DHHC-1:03:NDk5MjA4YWQ4OWY3NzU1Nzc1MGNjNTRkNzA2Mzk3Y2Q5Mzc5NDdjNzViZjEwNmUxMTA4NjhkNjBkNTlmNDA2NEL2rvg=: 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:58.702 12:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:59.696 request: 00:23:59.696 { 00:23:59.696 "name": "nvme0", 00:23:59.696 "trtype": "tcp", 00:23:59.696 "traddr": "10.0.0.2", 00:23:59.696 "adrfam": "ipv4", 00:23:59.696 "trsvcid": "4420", 00:23:59.696 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:59.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:59.696 "prchk_reftag": false, 00:23:59.696 "prchk_guard": false, 00:23:59.696 "hdgst": false, 00:23:59.696 "ddgst": false, 00:23:59.696 "dhchap_key": "key2", 00:23:59.696 "allow_unrecognized_csi": false, 00:23:59.696 "method": "bdev_nvme_attach_controller", 00:23:59.696 "req_id": 1 00:23:59.696 } 00:23:59.696 Got JSON-RPC error response 00:23:59.696 response: 00:23:59.696 { 00:23:59.696 "code": -5, 00:23:59.696 "message": 
"Input/output error" 00:23:59.696 } 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:59.696 12:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.696 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:59.697 12:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:00.265 request: 00:24:00.265 { 00:24:00.265 "name": "nvme0", 00:24:00.265 "trtype": "tcp", 00:24:00.265 "traddr": "10.0.0.2", 00:24:00.265 "adrfam": "ipv4", 00:24:00.265 "trsvcid": "4420", 00:24:00.265 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:00.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:24:00.265 "prchk_reftag": false, 00:24:00.265 "prchk_guard": false, 00:24:00.265 "hdgst": 
false, 00:24:00.265 "ddgst": false, 00:24:00.265 "dhchap_key": "key1", 00:24:00.265 "dhchap_ctrlr_key": "ckey2", 00:24:00.265 "allow_unrecognized_csi": false, 00:24:00.265 "method": "bdev_nvme_attach_controller", 00:24:00.265 "req_id": 1 00:24:00.265 } 00:24:00.265 Got JSON-RPC error response 00:24:00.265 response: 00:24:00.265 { 00:24:00.265 "code": -5, 00:24:00.265 "message": "Input/output error" 00:24:00.265 } 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.265 12:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.202 request: 00:24:01.202 { 00:24:01.202 "name": "nvme0", 00:24:01.202 "trtype": 
"tcp", 00:24:01.202 "traddr": "10.0.0.2", 00:24:01.202 "adrfam": "ipv4", 00:24:01.202 "trsvcid": "4420", 00:24:01.202 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:01.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:24:01.202 "prchk_reftag": false, 00:24:01.202 "prchk_guard": false, 00:24:01.202 "hdgst": false, 00:24:01.202 "ddgst": false, 00:24:01.202 "dhchap_key": "key1", 00:24:01.202 "dhchap_ctrlr_key": "ckey1", 00:24:01.202 "allow_unrecognized_csi": false, 00:24:01.202 "method": "bdev_nvme_attach_controller", 00:24:01.202 "req_id": 1 00:24:01.202 } 00:24:01.202 Got JSON-RPC error response 00:24:01.202 response: 00:24:01.202 { 00:24:01.202 "code": -5, 00:24:01.202 "message": "Input/output error" 00:24:01.202 } 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 654243 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 654243 ']' 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 654243 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 654243 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 654243' 00:24:01.202 killing process with pid 654243 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 654243 00:24:01.202 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 654243 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=676580 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 676580 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 676580 ']' 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:01.461 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 676580 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 676580 ']' 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:01.719 12:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.977 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:01.977 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:24:01.977 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:01.977 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.977 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.977 null0 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.e0M 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.cev ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cev 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.wjI 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.QW5 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.QW5 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qZG 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Tcb ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tcb 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.oOL 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:02.236 12:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:03.614 nvme0n1 00:24:03.614 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:03.614 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:03.614 12:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:03.871 { 00:24:03.871 "cntlid": 1, 00:24:03.871 "qid": 0, 00:24:03.871 "state": "enabled", 00:24:03.871 "thread": "nvmf_tgt_poll_group_000", 00:24:03.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:24:03.871 "listen_address": { 00:24:03.871 "trtype": "TCP", 00:24:03.871 "adrfam": "IPv4", 00:24:03.871 "traddr": "10.0.0.2", 00:24:03.871 "trsvcid": "4420" 00:24:03.871 }, 00:24:03.871 "peer_address": { 00:24:03.871 "trtype": "TCP", 00:24:03.871 "adrfam": "IPv4", 00:24:03.871 "traddr": 
"10.0.0.1", 00:24:03.871 "trsvcid": "54602" 00:24:03.871 }, 00:24:03.871 "auth": { 00:24:03.871 "state": "completed", 00:24:03.871 "digest": "sha512", 00:24:03.871 "dhgroup": "ffdhe8192" 00:24:03.871 } 00:24:03.871 } 00:24:03.871 ]' 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:03.871 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:04.129 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:04.129 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:04.129 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:04.129 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:04.129 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.387 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:24:04.387 12:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:24:05.322 12:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:05.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:05.322 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:05.579 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:05.579 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:05.579 12:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:05.579 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:05.579 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.579 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:05.579 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:05.580 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:05.580 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.580 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:05.838 request: 00:24:05.838 { 00:24:05.838 "name": "nvme0", 00:24:05.838 "trtype": "tcp", 00:24:05.838 "traddr": "10.0.0.2", 00:24:05.838 "adrfam": "ipv4", 00:24:05.838 "trsvcid": "4420", 00:24:05.838 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:05.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:24:05.838 "prchk_reftag": false, 00:24:05.838 "prchk_guard": false, 00:24:05.838 "hdgst": false, 00:24:05.838 "ddgst": false, 00:24:05.838 "dhchap_key": "key3", 00:24:05.838 
"allow_unrecognized_csi": false, 00:24:05.838 "method": "bdev_nvme_attach_controller", 00:24:05.838 "req_id": 1 00:24:05.838 } 00:24:05.838 Got JSON-RPC error response 00:24:05.838 response: 00:24:05.838 { 00:24:05.838 "code": -5, 00:24:05.838 "message": "Input/output error" 00:24:05.838 } 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:05.838 12:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:06.096 12:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:06.096 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:06.353 request: 00:24:06.353 { 00:24:06.353 "name": "nvme0", 00:24:06.353 "trtype": "tcp", 00:24:06.353 "traddr": "10.0.0.2", 00:24:06.353 "adrfam": "ipv4", 00:24:06.353 "trsvcid": "4420", 00:24:06.353 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:06.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:24:06.353 "prchk_reftag": false, 00:24:06.353 "prchk_guard": false, 00:24:06.353 "hdgst": false, 00:24:06.353 "ddgst": false, 00:24:06.353 "dhchap_key": "key3", 00:24:06.353 "allow_unrecognized_csi": false, 00:24:06.353 "method": "bdev_nvme_attach_controller", 00:24:06.353 "req_id": 1 00:24:06.353 } 00:24:06.353 Got JSON-RPC error response 00:24:06.353 response: 00:24:06.353 { 00:24:06.353 "code": -5, 00:24:06.353 "message": "Input/output error" 00:24:06.353 } 00:24:06.353 
12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:06.353 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:06.353 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:06.353 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:06.353 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:06.353 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:06.354 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:06.354 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.354 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.354 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:06.611 12:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:07.176 request: 00:24:07.176 { 00:24:07.176 "name": "nvme0", 00:24:07.176 "trtype": "tcp", 00:24:07.176 "traddr": "10.0.0.2", 00:24:07.176 "adrfam": "ipv4", 00:24:07.176 "trsvcid": "4420", 00:24:07.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:07.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:24:07.176 "prchk_reftag": false, 00:24:07.176 "prchk_guard": false, 00:24:07.176 "hdgst": false, 00:24:07.176 "ddgst": false, 00:24:07.176 "dhchap_key": "key0", 00:24:07.176 "dhchap_ctrlr_key": "key1", 00:24:07.176 "allow_unrecognized_csi": false, 00:24:07.176 "method": "bdev_nvme_attach_controller", 00:24:07.176 "req_id": 1 00:24:07.176 } 00:24:07.176 Got JSON-RPC error response 00:24:07.176 response: 00:24:07.176 { 00:24:07.176 "code": -5, 00:24:07.176 "message": "Input/output error" 00:24:07.176 } 00:24:07.176 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:07.176 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.176 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.176 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.176 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:07.176 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:07.176 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:07.433 nvme0n1 00:24:07.691 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:07.691 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.691 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:07.949 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.949 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:07.949 12:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.206 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:24:08.206 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.206 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:08.206 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.206 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:08.206 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:08.206 12:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:09.582 nvme0n1 00:24:09.582 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:09.582 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:09.582 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.840 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.840 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:09.840 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.840 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.840 
12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.840 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:09.840 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:09.840 12:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.099 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.099 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:24:10.099 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: --dhchap-ctrl-secret DHHC-1:03:ZTkxMmEyYjlhNDQ5NTczNzlhMDNlOWZjYTFkOWY0ZTNhYzVhM2JmMzY3YzdlZWVmMzllNWFkZTk1ZDE1ZjUwZRjZiA4=: 00:24:11.030 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:24:11.030 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:11.030 12:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:11.030 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:11.030 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:11.030 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:11.030 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:11.030 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.030 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:11.288 12:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:11.854 request: 00:24:11.854 { 00:24:11.854 "name": "nvme0", 00:24:11.854 "trtype": "tcp", 00:24:11.854 "traddr": "10.0.0.2", 00:24:11.854 "adrfam": "ipv4", 00:24:11.854 "trsvcid": "4420", 00:24:11.854 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:11.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:24:11.854 "prchk_reftag": false, 00:24:11.854 "prchk_guard": false, 00:24:11.854 "hdgst": false, 00:24:11.854 "ddgst": false, 00:24:11.854 "dhchap_key": "key1", 00:24:11.854 "allow_unrecognized_csi": false, 00:24:11.854 "method": "bdev_nvme_attach_controller", 00:24:11.854 "req_id": 1 00:24:11.854 } 00:24:11.854 Got JSON-RPC error response 00:24:11.854 response: 00:24:11.854 { 00:24:11.854 "code": -5, 00:24:11.854 "message": "Input/output error" 00:24:11.854 } 00:24:11.854 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:11.854 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:11.854 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:11.854 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:11.854 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:11.855 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:11.855 12:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:13.758 nvme0n1 00:24:13.758 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:13.758 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:13.758 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.758 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.758 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.758 12:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.016 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:14.016 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.016 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.016 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.016 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:14.016 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:14.016 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:14.274 nvme0n1 00:24:14.274 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:14.274 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:14.274 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:14.532 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.532 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:14.532 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: '' 2s 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: ]] 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NjQzN2ExYzNhNDE5MTEyM2RiNzJjOTIxMWZjM2Y2ZTcXSV2r: 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:14.791 12:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:17.328 
12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:17.328 12:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: 2s 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:17.328 12:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: ]] 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWU3YmVhZDIxNGVmNjg4NzJkOWM1MWY2ODM4ZGExMDQxZGFlOTdlOTBkYzU2NmMzEc1FFg==: 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:17.328 12:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:19.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:19.230 12:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:20.611 nvme0n1 00:24:20.611 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:24:20.611 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.611 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.611 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.611 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:20.611 12:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:21.178 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:21.178 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:21.178 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.436 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.436 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:21.436 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.436 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.436 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.436 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:21.436 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:21.694 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:21.694 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.694 12:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:21.951 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:22.885 request: 00:24:22.885 { 00:24:22.885 "name": "nvme0", 00:24:22.885 "dhchap_key": "key1", 00:24:22.885 "dhchap_ctrlr_key": "key3", 00:24:22.885 "method": "bdev_nvme_set_keys", 00:24:22.885 "req_id": 1 00:24:22.885 } 00:24:22.885 Got JSON-RPC error response 00:24:22.885 response: 00:24:22.885 { 00:24:22.885 "code": -13, 00:24:22.885 "message": "Permission denied" 00:24:22.885 } 00:24:22.886 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:22.886 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:22.886 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:22.886 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:22.886 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:22.886 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.886 12:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:23.143 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:24:23.143 12:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:24.077 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:24.077 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:24.077 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:24.335 12:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:25.713 nvme0n1 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:25.713 12:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:26.649 request: 00:24:26.649 { 00:24:26.649 "name": "nvme0", 00:24:26.649 "dhchap_key": "key2", 00:24:26.649 "dhchap_ctrlr_key": "key0", 00:24:26.649 "method": "bdev_nvme_set_keys", 00:24:26.649 "req_id": 1 00:24:26.649 } 00:24:26.649 Got JSON-RPC error response 00:24:26.649 response: 00:24:26.649 { 00:24:26.649 "code": -13, 00:24:26.649 "message": "Permission denied" 00:24:26.649 } 00:24:26.649 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:26.649 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:26.649 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:26.649 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:26.649 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:26.649 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:26.649 12:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:26.908 
12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:26.908 12:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:27.844 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:27.844 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:27.844 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:28.409 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:28.409 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:28.409 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 654269 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 654269 ']' 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 654269 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 654269 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:28.410 12:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 654269' 00:24:28.410 killing process with pid 654269 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 654269 00:24:28.410 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 654269 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.670 rmmod nvme_tcp 00:24:28.670 rmmod nvme_fabrics 00:24:28.670 rmmod nvme_keyring 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 676580 ']' 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 676580 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 676580 ']' 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@956 -- # kill -0 676580 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 676580 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 676580' 00:24:28.670 killing process with pid 676580 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 676580 00:24:28.670 12:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 676580 00:24:28.928 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:28.928 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.929 12:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.e0M /tmp/spdk.key-sha256.wjI /tmp/spdk.key-sha384.qZG /tmp/spdk.key-sha512.oOL /tmp/spdk.key-sha512.cev /tmp/spdk.key-sha384.QW5 /tmp/spdk.key-sha256.Tcb '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:31.466 00:24:31.466 real 3m28.338s 00:24:31.466 user 8m9.257s 00:24:31.466 sys 0m27.302s 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.466 ************************************ 00:24:31.466 END TEST nvmf_auth_target 00:24:31.466 ************************************ 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:31.466 12:39:00 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.466 ************************************ 00:24:31.466 START TEST nvmf_bdevio_no_huge 00:24:31.466 ************************************ 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:31.466 * Looking for test storage... 00:24:31.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.466 12:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:31.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.466 --rc genhtml_branch_coverage=1 00:24:31.466 --rc genhtml_function_coverage=1 00:24:31.466 --rc genhtml_legend=1 00:24:31.466 --rc geninfo_all_blocks=1 00:24:31.466 --rc geninfo_unexecuted_blocks=1 00:24:31.466 00:24:31.466 ' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.466 --rc genhtml_branch_coverage=1 00:24:31.466 --rc genhtml_function_coverage=1 00:24:31.466 --rc genhtml_legend=1 00:24:31.466 --rc geninfo_all_blocks=1 00:24:31.466 --rc geninfo_unexecuted_blocks=1 00:24:31.466 00:24:31.466 ' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.466 --rc genhtml_branch_coverage=1 00:24:31.466 --rc genhtml_function_coverage=1 00:24:31.466 --rc genhtml_legend=1 00:24:31.466 --rc geninfo_all_blocks=1 00:24:31.466 --rc geninfo_unexecuted_blocks=1 00:24:31.466 00:24:31.466 ' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.466 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.466 --rc genhtml_branch_coverage=1 00:24:31.466 --rc genhtml_function_coverage=1 00:24:31.466 --rc genhtml_legend=1 00:24:31.466 --rc geninfo_all_blocks=1 00:24:31.466 --rc geninfo_unexecuted_blocks=1 00:24:31.466 00:24:31.466 ' 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.466 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.467 12:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.467 12:39:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:24:33.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:33.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:33.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.415 
12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:33.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.415 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:24:33.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:24:33.416 00:24:33.416 --- 10.0.0.2 ping statistics --- 00:24:33.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.416 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:24:33.416 00:24:33.416 --- 10.0.0.1 ping statistics --- 00:24:33.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.416 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=681876 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 681876 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 681876 ']' 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.416 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.416 [2024-11-05 12:39:02.509589] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:24:33.416 [2024-11-05 12:39:02.509673] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:33.416 [2024-11-05 12:39:02.590183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.416 [2024-11-05 12:39:02.634848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.416 [2024-11-05 12:39:02.634928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.416 [2024-11-05 12:39:02.634957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.416 [2024-11-05 12:39:02.634969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.416 [2024-11-05 12:39:02.634978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:33.416 [2024-11-05 12:39:02.636064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:33.416 [2024-11-05 12:39:02.636139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:33.416 [2024-11-05 12:39:02.636143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.416 [2024-11-05 12:39:02.636089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.674 [2024-11-05 12:39:02.779240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:33.674 12:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.674 Malloc0 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:33.674 [2024-11-05 12:39:02.817187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.674 12:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:33.674 { 00:24:33.674 "params": { 00:24:33.674 "name": "Nvme$subsystem", 00:24:33.674 "trtype": "$TEST_TRANSPORT", 00:24:33.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.674 "adrfam": "ipv4", 00:24:33.674 "trsvcid": "$NVMF_PORT", 00:24:33.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.674 "hdgst": ${hdgst:-false}, 00:24:33.674 "ddgst": ${ddgst:-false} 00:24:33.674 }, 00:24:33.674 "method": "bdev_nvme_attach_controller" 00:24:33.674 } 00:24:33.674 EOF 00:24:33.674 )") 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:24:33.674 12:39:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:33.674 "params": { 00:24:33.674 "name": "Nvme1", 00:24:33.674 "trtype": "tcp", 00:24:33.674 "traddr": "10.0.0.2", 00:24:33.674 "adrfam": "ipv4", 00:24:33.674 "trsvcid": "4420", 00:24:33.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.674 "hdgst": false, 00:24:33.674 "ddgst": false 00:24:33.674 }, 00:24:33.674 "method": "bdev_nvme_attach_controller" 00:24:33.675 }' 00:24:33.675 [2024-11-05 12:39:02.865732] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:24:33.675 [2024-11-05 12:39:02.865818] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid681918 ] 00:24:33.933 [2024-11-05 12:39:02.941190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:33.933 [2024-11-05 12:39:02.990276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.933 [2024-11-05 12:39:02.990325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.933 [2024-11-05 12:39:02.990329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.191 I/O targets: 00:24:34.191 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:34.191 00:24:34.191 00:24:34.191 CUnit - A unit testing framework for C - Version 2.1-3 00:24:34.191 http://cunit.sourceforge.net/ 00:24:34.191 00:24:34.191 00:24:34.191 Suite: bdevio tests on: Nvme1n1 00:24:34.191 Test: blockdev write read block ...passed 00:24:34.191 Test: blockdev write zeroes read block ...passed 00:24:34.191 Test: blockdev write zeroes read no split ...passed 00:24:34.191 Test: blockdev write zeroes 
read split ...passed 00:24:34.191 Test: blockdev write zeroes read split partial ...passed 00:24:34.191 Test: blockdev reset ...[2024-11-05 12:39:03.410636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:34.192 [2024-11-05 12:39:03.410757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17444b0 (9): Bad file descriptor 00:24:34.451 [2024-11-05 12:39:03.511847] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:24:34.451 passed 00:24:34.451 Test: blockdev write read 8 blocks ...passed 00:24:34.451 Test: blockdev write read size > 128k ...passed 00:24:34.451 Test: blockdev write read invalid size ...passed 00:24:34.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:34.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:34.451 Test: blockdev write read max offset ...passed 00:24:34.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:34.451 Test: blockdev writev readv 8 blocks ...passed 00:24:34.451 Test: blockdev writev readv 30 x 1block ...passed 00:24:34.710 Test: blockdev writev readv block ...passed 00:24:34.710 Test: blockdev writev readv size > 128k ...passed 00:24:34.710 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:34.710 Test: blockdev comparev and writev ...[2024-11-05 12:39:03.725760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 12:39:03.725797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.725823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 
12:39:03.725841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.726170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 12:39:03.726195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.726218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 12:39:03.726234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.726558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 12:39:03.726582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.726611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 12:39:03.726629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.726962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 12:39:03.726986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.727008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:24:34.710 [2024-11-05 12:39:03.727025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.710 passed 00:24:34.710 Test: blockdev nvme passthru rw ...passed 00:24:34.710 Test: blockdev nvme passthru vendor specific ...[2024-11-05 12:39:03.811108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:34.710 [2024-11-05 12:39:03.811137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.811278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:34.710 [2024-11-05 12:39:03.811302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.811437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:34.710 [2024-11-05 12:39:03.811460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.710 [2024-11-05 12:39:03.811598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:34.710 [2024-11-05 12:39:03.811621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.710 passed 00:24:34.710 Test: blockdev nvme admin passthru ...passed 00:24:34.710 Test: blockdev copy ...passed 00:24:34.710 00:24:34.710 Run Summary: Type Total Ran Passed Failed Inactive 00:24:34.710 suites 1 1 n/a 0 0 00:24:34.710 tests 23 23 23 0 0 00:24:34.710 asserts 152 152 152 0 n/a 00:24:34.710 00:24:34.710 Elapsed time = 1.141 seconds 
00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.968 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.968 rmmod nvme_tcp 00:24:35.227 rmmod nvme_fabrics 00:24:35.227 rmmod nvme_keyring 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 681876 ']' 00:24:35.227 12:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 681876 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 681876 ']' 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 681876 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 681876 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 681876' 00:24:35.227 killing process with pid 681876 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 681876 00:24:35.227 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 681876 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:24:35.484 12:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.484 12:39:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.024 00:24:38.024 real 0m6.528s 00:24:38.024 user 0m10.854s 00:24:38.024 sys 0m2.563s 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:38.024 ************************************ 00:24:38.024 END TEST nvmf_bdevio_no_huge 00:24:38.024 ************************************ 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:38.024 
************************************ 00:24:38.024 START TEST nvmf_tls 00:24:38.024 ************************************ 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:38.024 * Looking for test storage... 00:24:38.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:24:38.024 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.025 --rc genhtml_branch_coverage=1 00:24:38.025 --rc genhtml_function_coverage=1 00:24:38.025 --rc genhtml_legend=1 00:24:38.025 --rc geninfo_all_blocks=1 00:24:38.025 --rc geninfo_unexecuted_blocks=1 00:24:38.025 00:24:38.025 ' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.025 --rc genhtml_branch_coverage=1 00:24:38.025 --rc genhtml_function_coverage=1 00:24:38.025 --rc genhtml_legend=1 00:24:38.025 --rc geninfo_all_blocks=1 00:24:38.025 --rc geninfo_unexecuted_blocks=1 00:24:38.025 00:24:38.025 ' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.025 --rc genhtml_branch_coverage=1 00:24:38.025 --rc genhtml_function_coverage=1 00:24:38.025 --rc genhtml_legend=1 00:24:38.025 --rc geninfo_all_blocks=1 00:24:38.025 --rc geninfo_unexecuted_blocks=1 00:24:38.025 00:24:38.025 ' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:38.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.025 --rc genhtml_branch_coverage=1 00:24:38.025 --rc genhtml_function_coverage=1 00:24:38.025 --rc genhtml_legend=1 00:24:38.025 --rc geninfo_all_blocks=1 00:24:38.025 --rc geninfo_unexecuted_blocks=1 00:24:38.025 00:24:38.025 ' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.025 
12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.025 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.026 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.026 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.026 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:38.026 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:38.026 12:39:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.924 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.925 12:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:39.925 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:39.925 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.925 12:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:39.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:39.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:39.925 12:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.925 
12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.925 12:39:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:24:39.925 00:24:39.925 --- 10.0.0.2 ping statistics --- 00:24:39.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.925 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:24:39.925 00:24:39.925 --- 10.0.0.1 ping statistics --- 00:24:39.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.925 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=684573 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 684573 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 684573 ']' 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:39.925 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.925 [2024-11-05 12:39:09.113442] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:24:39.925 [2024-11-05 12:39:09.113514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.183 [2024-11-05 12:39:09.190275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.183 [2024-11-05 12:39:09.236087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.183 [2024-11-05 12:39:09.236158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:40.183 [2024-11-05 12:39:09.236172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.183 [2024-11-05 12:39:09.236183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.183 [2024-11-05 12:39:09.236192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.183 [2024-11-05 12:39:09.236753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:40.183 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:40.440 true 00:24:40.440 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:40.440 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:40.697 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:40.697 12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:40.697 
12:39:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:41.260 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:41.260 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:41.517 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:41.517 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:41.517 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:41.774 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:41.774 12:39:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:42.032 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:42.032 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:42.032 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:42.032 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:42.290 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:42.290 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:42.290 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:24:42.547 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:42.547 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:42.804 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:42.804 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:42.804 12:39:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:43.061 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:43.061 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:43.319 12:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9hHd8IWT2o 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.2yGdlNp4Ol 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9hHd8IWT2o 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.2yGdlNp4Ol 00:24:43.319 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:43.884 12:39:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:44.141 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9hHd8IWT2o 00:24:44.141 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9hHd8IWT2o 00:24:44.141 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:44.399 [2024-11-05 12:39:13.501112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.399 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:44.656 12:39:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:44.913 [2024-11-05 12:39:14.094738] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:44.913 [2024-11-05 12:39:14.095068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.913 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:45.478 malloc0 00:24:45.478 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:45.736 12:39:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9hHd8IWT2o 00:24:45.993 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:46.250 12:39:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9hHd8IWT2o 00:24:56.210 Initializing NVMe Controllers 00:24:56.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:56.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:56.210 Initialization complete. Launching workers. 
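The interchange PSKs generated above (`NVMeTLSkey-1:01:MDAx…` and `NVMeTLSkey-1:01:ZmZl…`) come from the harness's `format_interchange_psk` / `format_key` helpers, which pipe the work through an inline `python -` heredoc. The sketch below re-creates that step as inferred from the logged output, not copied from the SPDK sources: the ASCII hex secret with its CRC32 appended is base64-encoded between a `NVMeTLSkey-1:<digest>:` prefix and a trailing colon. The function name mirrors the log; the exact CRC placement is an assumption.

```shell
# Hedged re-creation of the format_interchange_psk step seen in the log.
# Assumption: the base64 payload is ascii_hex_secret + CRC32(secret),
# little-endian, matching the "MDAxMTIy..." output for the 00112233... key.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib

secret = sys.argv[1].encode()                      # ASCII hex string, as logged
crc = struct.pack('<I', zlib.crc32(secret))        # assumed little-endian CRC32
b64 = base64.b64encode(secret + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{b64}:")
EOF
}

# Same inputs as target/tls.sh@119 above:
format_interchange_psk 00112233445566778899aabbccddeeff 1
```

The leading base64 groups are fully determined by the hex string itself, which is why the printed key starts with the same `MDAxMTIyMzM0NDU1…` run shown at target/tls.sh@119 regardless of the CRC tail.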
00:24:56.210 ======================================================== 00:24:56.210 Latency(us) 00:24:56.210 Device Information : IOPS MiB/s Average min max 00:24:56.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8717.49 34.05 7343.50 992.51 8747.42 00:24:56.210 ======================================================== 00:24:56.211 Total : 8717.49 34.05 7343.50 992.51 8747.42 00:24:56.211 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9hHd8IWT2o 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9hHd8IWT2o 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=686505 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 686505 /var/tmp/bdevperf.sock 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 686505 ']' 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.211 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.469 [2024-11-05 12:39:25.470754] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:24:56.469 [2024-11-05 12:39:25.470830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686505 ] 00:24:56.469 [2024-11-05 12:39:25.538840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.469 [2024-11-05 12:39:25.586237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.469 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:56.469 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:24:56.469 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9hHd8IWT2o 00:24:57.033 12:39:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:24:57.033 [2024-11-05 12:39:26.226992] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.291 TLSTESTn1 00:24:57.291 12:39:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:57.291 Running I/O for 10 seconds... 00:24:59.596 3424.00 IOPS, 13.38 MiB/s [2024-11-05T11:39:29.764Z] 3468.00 IOPS, 13.55 MiB/s [2024-11-05T11:39:30.696Z] 3434.33 IOPS, 13.42 MiB/s [2024-11-05T11:39:31.630Z] 3454.75 IOPS, 13.50 MiB/s [2024-11-05T11:39:32.563Z] 3469.60 IOPS, 13.55 MiB/s [2024-11-05T11:39:33.496Z] 3476.00 IOPS, 13.58 MiB/s [2024-11-05T11:39:34.870Z] 3483.14 IOPS, 13.61 MiB/s [2024-11-05T11:39:35.803Z] 3487.75 IOPS, 13.62 MiB/s [2024-11-05T11:39:36.737Z] 3491.67 IOPS, 13.64 MiB/s [2024-11-05T11:39:36.737Z] 3495.60 IOPS, 13.65 MiB/s 00:25:07.499 Latency(us) 00:25:07.499 [2024-11-05T11:39:36.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.499 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:07.499 Verification LBA range: start 0x0 length 0x2000 00:25:07.499 TLSTESTn1 : 10.02 3500.21 13.67 0.00 0.00 36504.06 9514.86 66798.17 00:25:07.499 [2024-11-05T11:39:36.737Z] =================================================================================================================== 00:25:07.499 [2024-11-05T11:39:36.737Z] Total : 3500.21 13.67 0.00 0.00 36504.06 9514.86 66798.17 00:25:07.499 { 00:25:07.499 "results": [ 00:25:07.499 { 00:25:07.499 "job": "TLSTESTn1", 00:25:07.499 "core_mask": "0x4", 00:25:07.499 "workload": "verify", 00:25:07.499 "status": "finished", 00:25:07.499 "verify_range": { 00:25:07.499 "start": 0, 00:25:07.499 "length": 8192 00:25:07.499 }, 00:25:07.499 "queue_depth": 128, 00:25:07.499 "io_size": 4096, 00:25:07.499 "runtime": 10.022534, 00:25:07.499 "iops": 
3500.2126208801087, 00:25:07.499 "mibps": 13.672705550312925, 00:25:07.499 "io_failed": 0, 00:25:07.499 "io_timeout": 0, 00:25:07.499 "avg_latency_us": 36504.058541006154, 00:25:07.499 "min_latency_us": 9514.856296296297, 00:25:07.499 "max_latency_us": 66798.17481481482 00:25:07.499 } 00:25:07.499 ], 00:25:07.499 "core_count": 1 00:25:07.499 } 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 686505 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 686505 ']' 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 686505 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 686505 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 686505' 00:25:07.499 killing process with pid 686505 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 686505 00:25:07.499 Received shutdown signal, test time was about 10.000000 seconds 00:25:07.499 00:25:07.499 Latency(us) 00:25:07.499 [2024-11-05T11:39:36.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.499 [2024-11-05T11:39:36.737Z] 
=================================================================================================================== 00:25:07.499 [2024-11-05T11:39:36.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 686505 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2yGdlNp4Ol 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2yGdlNp4Ol 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2yGdlNp4Ol 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2yGdlNp4Ol 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=687819 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:07.499 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 687819 /var/tmp/bdevperf.sock 00:25:07.500 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 687819 ']' 00:25:07.500 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.500 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:07.500 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.500 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:07.500 12:39:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.758 [2024-11-05 12:39:36.772754] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:07.758 [2024-11-05 12:39:36.772834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687819 ] 00:25:07.758 [2024-11-05 12:39:36.839675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.758 [2024-11-05 12:39:36.884336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.016 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:08.016 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:08.016 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2yGdlNp4Ol 00:25:08.274 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:08.532 [2024-11-05 12:39:37.540145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:08.532 [2024-11-05 12:39:37.545970] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:08.532 [2024-11-05 12:39:37.546515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd76e0 (107): Transport endpoint is not connected 00:25:08.532 [2024-11-05 12:39:37.547504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd76e0 (9): Bad file descriptor 00:25:08.532 
[2024-11-05 12:39:37.548502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:08.532 [2024-11-05 12:39:37.548530] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:08.532 [2024-11-05 12:39:37.548544] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:08.532 [2024-11-05 12:39:37.548569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:08.532 request: 00:25:08.532 { 00:25:08.532 "name": "TLSTEST", 00:25:08.532 "trtype": "tcp", 00:25:08.532 "traddr": "10.0.0.2", 00:25:08.532 "adrfam": "ipv4", 00:25:08.532 "trsvcid": "4420", 00:25:08.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.532 "prchk_reftag": false, 00:25:08.532 "prchk_guard": false, 00:25:08.532 "hdgst": false, 00:25:08.532 "ddgst": false, 00:25:08.532 "psk": "key0", 00:25:08.532 "allow_unrecognized_csi": false, 00:25:08.532 "method": "bdev_nvme_attach_controller", 00:25:08.532 "req_id": 1 00:25:08.532 } 00:25:08.532 Got JSON-RPC error response 00:25:08.532 response: 00:25:08.532 { 00:25:08.532 "code": -5, 00:25:08.533 "message": "Input/output error" 00:25:08.533 } 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 687819 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 687819 ']' 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 687819 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 687819 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 687819' 00:25:08.533 killing process with pid 687819 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 687819 00:25:08.533 Received shutdown signal, test time was about 10.000000 seconds 00:25:08.533 00:25:08.533 Latency(us) 00:25:08.533 [2024-11-05T11:39:37.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.533 [2024-11-05T11:39:37.771Z] =================================================================================================================== 00:25:08.533 [2024-11-05T11:39:37.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:08.533 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 687819 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9hHd8IWT2o 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
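The `NOT run_bdevperf …` steps above are negative tests: attaching with the wrong PSK is *expected* to fail with the JSON-RPC `Input/output error`, and the harness asserts on the inverted exit status (`return 1` → `es=1` → success). A minimal sketch of that wrapper pattern, with names assumed rather than taken from `autotest_common.sh`:

```shell
# Hypothetical sketch of the NOT() negative-test wrapper used above:
# the wrapped command must fail for the test step to pass.
NOT() {
    # Invert the exit status of the wrapped command.
    "$@" && return 1
    return 0
}

# Stand-in for run_bdevperf with a mismatched PSK, which exits nonzero
# after bdev_nvme_attach_controller reports "Input/output error".
fail_like_wrong_psk() { return 1; }

if NOT fail_like_wrong_psk; then
    echo "negative test passed"
fi
```

This is why the log shows `target/tls.sh@38 -- # return 1` followed by `es=1` without the run being marked as a failure.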
00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9hHd8IWT2o 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9hHd8IWT2o 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9hHd8IWT2o 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=687960 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 687960 
/var/tmp/bdevperf.sock 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 687960 ']' 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:08.791 12:39:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.791 [2024-11-05 12:39:37.856422] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:08.791 [2024-11-05 12:39:37.856518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687960 ] 00:25:08.791 [2024-11-05 12:39:37.923095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.791 [2024-11-05 12:39:37.967012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.049 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:09.049 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:09.049 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9hHd8IWT2o 00:25:09.307 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:25:09.565 [2024-11-05 12:39:38.617714] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.565 [2024-11-05 12:39:38.626433] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:09.565 [2024-11-05 12:39:38.626466] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:09.565 [2024-11-05 12:39:38.626516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:09.565 [2024-11-05 12:39:38.626998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7a6e0 (107): Transport endpoint is not connected 00:25:09.565 [2024-11-05 12:39:38.627988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7a6e0 (9): Bad file descriptor 00:25:09.565 [2024-11-05 12:39:38.628986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:25:09.565 [2024-11-05 12:39:38.629014] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:09.565 [2024-11-05 12:39:38.629032] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:09.565 [2024-11-05 12:39:38.629052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:25:09.565 request: 00:25:09.565 { 00:25:09.565 "name": "TLSTEST", 00:25:09.565 "trtype": "tcp", 00:25:09.565 "traddr": "10.0.0.2", 00:25:09.565 "adrfam": "ipv4", 00:25:09.565 "trsvcid": "4420", 00:25:09.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.565 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:09.565 "prchk_reftag": false, 00:25:09.565 "prchk_guard": false, 00:25:09.565 "hdgst": false, 00:25:09.565 "ddgst": false, 00:25:09.565 "psk": "key0", 00:25:09.565 "allow_unrecognized_csi": false, 00:25:09.565 "method": "bdev_nvme_attach_controller", 00:25:09.565 "req_id": 1 00:25:09.565 } 00:25:09.565 Got JSON-RPC error response 00:25:09.565 response: 00:25:09.565 { 00:25:09.565 "code": -5, 00:25:09.565 "message": "Input/output error" 00:25:09.565 } 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 687960 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 687960 ']' 00:25:09.565 12:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 687960 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 687960 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 687960' 00:25:09.565 killing process with pid 687960 00:25:09.565 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 687960 00:25:09.565 Received shutdown signal, test time was about 10.000000 seconds 00:25:09.565 00:25:09.565 Latency(us) 00:25:09.565 [2024-11-05T11:39:38.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.565 [2024-11-05T11:39:38.803Z] =================================================================================================================== 00:25:09.565 [2024-11-05T11:39:38.804Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:09.566 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 687960 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.824 12:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9hHd8IWT2o 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9hHd8IWT2o 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9hHd8IWT2o 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9hHd8IWT2o 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=688101 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 688101 /var/tmp/bdevperf.sock 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 688101 ']' 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:09.824 12:39:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.824 [2024-11-05 12:39:38.928732] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:09.824 [2024-11-05 12:39:38.928824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688101 ] 00:25:09.824 [2024-11-05 12:39:38.995060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.824 [2024-11-05 12:39:39.037668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.082 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:10.082 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:10.082 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9hHd8IWT2o 00:25:10.340 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:10.602 [2024-11-05 12:39:39.685625] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.602 [2024-11-05 12:39:39.691074] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:10.602 [2024-11-05 12:39:39.691108] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:10.602 [2024-11-05 12:39:39.691162] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:10.602 [2024-11-05 12:39:39.691657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c76e0 (107): Transport endpoint is not connected 00:25:10.602 [2024-11-05 12:39:39.692645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c76e0 (9): Bad file descriptor 00:25:10.603 [2024-11-05 12:39:39.693644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:25:10.603 [2024-11-05 12:39:39.693665] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:10.603 [2024-11-05 12:39:39.693678] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:25:10.603 [2024-11-05 12:39:39.693695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:25:10.603 request: 00:25:10.603 { 00:25:10.603 "name": "TLSTEST", 00:25:10.603 "trtype": "tcp", 00:25:10.603 "traddr": "10.0.0.2", 00:25:10.603 "adrfam": "ipv4", 00:25:10.603 "trsvcid": "4420", 00:25:10.603 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:10.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.603 "prchk_reftag": false, 00:25:10.603 "prchk_guard": false, 00:25:10.603 "hdgst": false, 00:25:10.603 "ddgst": false, 00:25:10.603 "psk": "key0", 00:25:10.603 "allow_unrecognized_csi": false, 00:25:10.603 "method": "bdev_nvme_attach_controller", 00:25:10.603 "req_id": 1 00:25:10.603 } 00:25:10.603 Got JSON-RPC error response 00:25:10.603 response: 00:25:10.603 { 00:25:10.603 "code": -5, 00:25:10.603 "message": "Input/output error" 00:25:10.603 } 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 688101 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 688101 ']' 00:25:10.603 12:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 688101 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 688101 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 688101' 00:25:10.603 killing process with pid 688101 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 688101 00:25:10.603 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.603 00:25:10.603 Latency(us) 00:25:10.603 [2024-11-05T11:39:39.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.603 [2024-11-05T11:39:39.841Z] =================================================================================================================== 00:25:10.603 [2024-11-05T11:39:39.841Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:10.603 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 688101 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.884 12:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=688241 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 688241 /var/tmp/bdevperf.sock 00:25:10.884 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 688241 ']' 00:25:10.885 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.885 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:10.885 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.885 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:10.885 12:39:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.885 [2024-11-05 12:39:39.956890] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:10.885 [2024-11-05 12:39:39.956965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688241 ] 00:25:10.885 [2024-11-05 12:39:40.025689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.885 [2024-11-05 12:39:40.076798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.193 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:11.193 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:11.193 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:25:11.478 [2024-11-05 12:39:40.468581] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:25:11.478 [2024-11-05 12:39:40.468641] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:11.478 request: 00:25:11.478 { 00:25:11.478 "name": "key0", 00:25:11.478 "path": "", 00:25:11.478 "method": "keyring_file_add_key", 00:25:11.478 "req_id": 1 00:25:11.478 } 00:25:11.478 Got JSON-RPC error response 00:25:11.478 response: 00:25:11.478 { 00:25:11.478 "code": -1, 00:25:11.478 "message": "Operation not permitted" 00:25:11.478 } 00:25:11.478 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:11.736 [2024-11-05 12:39:40.765455] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:25:11.736 [2024-11-05 12:39:40.765510] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:11.736 request: 00:25:11.736 { 00:25:11.736 "name": "TLSTEST", 00:25:11.736 "trtype": "tcp", 00:25:11.736 "traddr": "10.0.0.2", 00:25:11.736 "adrfam": "ipv4", 00:25:11.736 "trsvcid": "4420", 00:25:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.736 "prchk_reftag": false, 00:25:11.736 "prchk_guard": false, 00:25:11.736 "hdgst": false, 00:25:11.736 "ddgst": false, 00:25:11.736 "psk": "key0", 00:25:11.736 "allow_unrecognized_csi": false, 00:25:11.736 "method": "bdev_nvme_attach_controller", 00:25:11.736 "req_id": 1 00:25:11.736 } 00:25:11.736 Got JSON-RPC error response 00:25:11.736 response: 00:25:11.736 { 00:25:11.736 "code": -126, 00:25:11.736 "message": "Required key not available" 00:25:11.736 } 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 688241 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 688241 ']' 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 688241 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 688241 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 688241' 00:25:11.736 killing process with pid 688241 00:25:11.736 
12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 688241 00:25:11.736 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.736 00:25:11.736 Latency(us) 00:25:11.736 [2024-11-05T11:39:40.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.736 [2024-11-05T11:39:40.974Z] =================================================================================================================== 00:25:11.736 [2024-11-05T11:39:40.974Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.736 12:39:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 688241 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 684573 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 684573 ']' 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 684573 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 684573 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 684573' 00:25:11.994 killing process with pid 684573 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 684573 00:25:11.994 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 684573 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.o9qx4O8mZV 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:12.253 12:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.o9qx4O8mZV 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=688398 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 688398 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 688398 ']' 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:12.253 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.253 [2024-11-05 12:39:41.353076] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:12.253 [2024-11-05 12:39:41.353151] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.253 [2024-11-05 12:39:41.423717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.253 [2024-11-05 12:39:41.467958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.253 [2024-11-05 12:39:41.468021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.253 [2024-11-05 12:39:41.468035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.253 [2024-11-05 12:39:41.468046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.253 [2024-11-05 12:39:41.468055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:12.253 [2024-11-05 12:39:41.468625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.o9qx4O8mZV 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o9qx4O8mZV 00:25:12.511 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:12.768 [2024-11-05 12:39:41.850628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.768 12:39:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:13.026 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:13.283 [2024-11-05 12:39:42.363973] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:13.283 [2024-11-05 12:39:42.364216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:13.283 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:13.540 malloc0 00:25:13.540 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:13.798 12:39:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:14.055 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9qx4O8mZV 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9qx4O8mZV 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=688683 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:14.313 12:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 688683 /var/tmp/bdevperf.sock 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 688683 ']' 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:14.313 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.313 [2024-11-05 12:39:43.490954] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:14.313 [2024-11-05 12:39:43.491031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688683 ] 00:25:14.571 [2024-11-05 12:39:43.561048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.571 [2024-11-05 12:39:43.607668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.571 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:14.571 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:14.571 12:39:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:14.828 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:15.086 [2024-11-05 12:39:44.241636] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:15.086 TLSTESTn1 00:25:15.344 12:39:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:15.344 Running I/O for 10 seconds... 
00:25:17.212 3211.00 IOPS, 12.54 MiB/s [2024-11-05T11:39:47.824Z] 3155.50 IOPS, 12.33 MiB/s [2024-11-05T11:39:48.756Z] 3202.67 IOPS, 12.51 MiB/s [2024-11-05T11:39:49.689Z] 3238.50 IOPS, 12.65 MiB/s [2024-11-05T11:39:50.621Z] 3232.00 IOPS, 12.62 MiB/s [2024-11-05T11:39:51.553Z] 3244.50 IOPS, 12.67 MiB/s [2024-11-05T11:39:52.486Z] 3247.43 IOPS, 12.69 MiB/s [2024-11-05T11:39:53.858Z] 3253.75 IOPS, 12.71 MiB/s [2024-11-05T11:39:54.792Z] 3249.00 IOPS, 12.69 MiB/s [2024-11-05T11:39:54.792Z] 3230.80 IOPS, 12.62 MiB/s 00:25:25.554 Latency(us) 00:25:25.554 [2024-11-05T11:39:54.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.554 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:25.554 Verification LBA range: start 0x0 length 0x2000 00:25:25.554 TLSTESTn1 : 10.04 3232.18 12.63 0.00 0.00 39523.84 9417.77 51263.72 00:25:25.554 [2024-11-05T11:39:54.792Z] =================================================================================================================== 00:25:25.554 [2024-11-05T11:39:54.792Z] Total : 3232.18 12.63 0.00 0.00 39523.84 9417.77 51263.72 00:25:25.554 { 00:25:25.554 "results": [ 00:25:25.554 { 00:25:25.554 "job": "TLSTESTn1", 00:25:25.554 "core_mask": "0x4", 00:25:25.554 "workload": "verify", 00:25:25.554 "status": "finished", 00:25:25.554 "verify_range": { 00:25:25.554 "start": 0, 00:25:25.554 "length": 8192 00:25:25.554 }, 00:25:25.554 "queue_depth": 128, 00:25:25.554 "io_size": 4096, 00:25:25.554 "runtime": 10.03503, 00:25:25.554 "iops": 3232.17768158142, 00:25:25.554 "mibps": 12.625694068677422, 00:25:25.554 "io_failed": 0, 00:25:25.554 "io_timeout": 0, 00:25:25.554 "avg_latency_us": 39523.84042386768, 00:25:25.554 "min_latency_us": 9417.765925925925, 00:25:25.554 "max_latency_us": 51263.71555555556 00:25:25.554 } 00:25:25.554 ], 00:25:25.554 "core_count": 1 00:25:25.554 } 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 688683 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 688683 ']' 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 688683 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 688683 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 688683' 00:25:25.554 killing process with pid 688683 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 688683 00:25:25.554 Received shutdown signal, test time was about 10.000000 seconds 00:25:25.554 00:25:25.554 Latency(us) 00:25:25.554 [2024-11-05T11:39:54.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.554 [2024-11-05T11:39:54.792Z] =================================================================================================================== 00:25:25.554 [2024-11-05T11:39:54.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 688683 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.o9qx4O8mZV 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9qx4O8mZV 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9qx4O8mZV 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o9qx4O8mZV 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:25.554 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.o9qx4O8mZV 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690017 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:25.555 12:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690017 /var/tmp/bdevperf.sock 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 690017 ']' 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:25.555 12:39:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:25.812 [2024-11-05 12:39:54.806942] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:25.812 [2024-11-05 12:39:54.807049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690017 ] 00:25:25.812 [2024-11-05 12:39:54.872894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.812 [2024-11-05 12:39:54.916938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.812 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:25.813 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:25.813 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:26.070 [2024-11-05 12:39:55.273925] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.o9qx4O8mZV': 0100666 00:25:26.070 [2024-11-05 12:39:55.273970] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:26.070 request: 00:25:26.070 { 00:25:26.070 "name": "key0", 00:25:26.070 "path": "/tmp/tmp.o9qx4O8mZV", 00:25:26.070 "method": "keyring_file_add_key", 00:25:26.070 "req_id": 1 00:25:26.070 } 00:25:26.070 Got JSON-RPC error response 00:25:26.070 response: 00:25:26.070 { 00:25:26.070 "code": -1, 00:25:26.070 "message": "Operation not permitted" 00:25:26.070 } 00:25:26.070 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:26.327 [2024-11-05 12:39:55.554757] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:26.327 [2024-11-05 12:39:55.554815] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:26.327 request: 00:25:26.327 { 00:25:26.327 "name": "TLSTEST", 00:25:26.327 "trtype": "tcp", 00:25:26.327 "traddr": "10.0.0.2", 00:25:26.327 "adrfam": "ipv4", 00:25:26.327 "trsvcid": "4420", 00:25:26.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:26.327 "prchk_reftag": false, 00:25:26.327 "prchk_guard": false, 00:25:26.327 "hdgst": false, 00:25:26.327 "ddgst": false, 00:25:26.327 "psk": "key0", 00:25:26.327 "allow_unrecognized_csi": false, 00:25:26.327 "method": "bdev_nvme_attach_controller", 00:25:26.327 "req_id": 1 00:25:26.327 } 00:25:26.327 Got JSON-RPC error response 00:25:26.327 response: 00:25:26.327 { 00:25:26.327 "code": -126, 00:25:26.327 "message": "Required key not available" 00:25:26.327 } 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 690017 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 690017 ']' 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 690017 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 690017 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 690017' 00:25:26.585 killing process with pid 690017 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 690017 00:25:26.585 Received shutdown signal, test time was about 10.000000 seconds 00:25:26.585 00:25:26.585 Latency(us) 00:25:26.585 [2024-11-05T11:39:55.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.585 [2024-11-05T11:39:55.823Z] =================================================================================================================== 00:25:26.585 [2024-11-05T11:39:55.823Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 690017 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 688398 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 688398 ']' 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 688398 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:26.585 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 688398 00:25:26.843 12:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:26.843 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:26.843 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 688398' 00:25:26.843 killing process with pid 688398 00:25:26.843 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 688398 00:25:26.843 12:39:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 688398 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=690165 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 690165 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 690165 ']' 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:26.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:26.843 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.101 [2024-11-05 12:39:56.110462] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:27.101 [2024-11-05 12:39:56.110554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.101 [2024-11-05 12:39:56.180935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.101 [2024-11-05 12:39:56.224125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.101 [2024-11-05 12:39:56.224199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.101 [2024-11-05 12:39:56.224223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.101 [2024-11-05 12:39:56.224235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.101 [2024-11-05 12:39:56.224252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:27.101 [2024-11-05 12:39:56.224803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.101 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:27.101 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:27.101 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.101 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:27.101 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.o9qx4O8mZV 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.o9qx4O8mZV 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.o9qx4O8mZV 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o9qx4O8mZV 00:25:27.358 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:27.616 [2024-11-05 12:39:56.619483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.616 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:27.873 12:39:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:28.130 [2024-11-05 12:39:57.144878] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.130 [2024-11-05 12:39:57.145127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.130 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:28.387 malloc0 00:25:28.387 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:28.644 12:39:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:28.901 [2024-11-05 12:39:58.081929] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.o9qx4O8mZV': 0100666 00:25:28.901 [2024-11-05 12:39:58.081982] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:28.901 request: 00:25:28.901 { 00:25:28.901 "name": "key0", 00:25:28.901 "path": "/tmp/tmp.o9qx4O8mZV", 00:25:28.901 "method": "keyring_file_add_key", 00:25:28.901 "req_id": 1 
00:25:28.901 } 00:25:28.901 Got JSON-RPC error response 00:25:28.901 response: 00:25:28.901 { 00:25:28.901 "code": -1, 00:25:28.901 "message": "Operation not permitted" 00:25:28.901 } 00:25:28.901 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:29.467 [2024-11-05 12:39:58.406800] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:29.467 [2024-11-05 12:39:58.406892] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:29.467 request: 00:25:29.467 { 00:25:29.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.467 "host": "nqn.2016-06.io.spdk:host1", 00:25:29.467 "psk": "key0", 00:25:29.467 "method": "nvmf_subsystem_add_host", 00:25:29.467 "req_id": 1 00:25:29.467 } 00:25:29.467 Got JSON-RPC error response 00:25:29.467 response: 00:25:29.467 { 00:25:29.467 "code": -32603, 00:25:29.467 "message": "Internal error" 00:25:29.467 } 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 690165 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 690165 ']' 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 690165 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:29.467 12:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 690165 00:25:29.467 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 690165' 00:25:29.468 killing process with pid 690165 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 690165 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 690165 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.o9qx4O8mZV 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=690463 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 690463 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 690463 ']' 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:29.468 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.726 [2024-11-05 12:39:58.734093] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:29.726 [2024-11-05 12:39:58.734189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.726 [2024-11-05 12:39:58.808787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.726 [2024-11-05 12:39:58.855980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.726 [2024-11-05 12:39:58.856048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.726 [2024-11-05 12:39:58.856067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.726 [2024-11-05 12:39:58.856078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.726 [2024-11-05 12:39:58.856089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:29.726 [2024-11-05 12:39:58.856673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.726 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.o9qx4O8mZV 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o9qx4O8mZV 00:25:29.983 12:39:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:30.240 [2024-11-05 12:39:59.233382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.240 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:30.499 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:30.760 [2024-11-05 12:39:59.819013] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.760 [2024-11-05 12:39:59.819354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:30.760 12:39:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:31.017 malloc0 00:25:31.017 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:31.275 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:31.533 12:40:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=690777 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 690777 /var/tmp/bdevperf.sock 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 690777 ']' 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:25:31.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:31.790 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.048 [2024-11-05 12:40:01.061232] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:32.048 [2024-11-05 12:40:01.061317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690777 ] 00:25:32.048 [2024-11-05 12:40:01.131580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.048 [2024-11-05 12:40:01.176776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.305 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:32.305 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:32.305 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:32.562 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:32.818 [2024-11-05 12:40:01.824317] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:32.818 TLSTESTn1 00:25:32.818 12:40:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:33.076 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:33.076 "subsystems": [ 00:25:33.076 { 00:25:33.076 "subsystem": "keyring", 00:25:33.076 "config": [ 00:25:33.076 { 00:25:33.076 "method": "keyring_file_add_key", 00:25:33.076 "params": { 00:25:33.076 "name": "key0", 00:25:33.076 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:33.076 } 00:25:33.076 } 00:25:33.076 ] 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "subsystem": "iobuf", 00:25:33.076 "config": [ 00:25:33.076 { 00:25:33.076 "method": "iobuf_set_options", 00:25:33.076 "params": { 00:25:33.076 "small_pool_count": 8192, 00:25:33.076 "large_pool_count": 1024, 00:25:33.076 "small_bufsize": 8192, 00:25:33.076 "large_bufsize": 135168, 00:25:33.076 "enable_numa": false 00:25:33.076 } 00:25:33.076 } 00:25:33.076 ] 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "subsystem": "sock", 00:25:33.076 "config": [ 00:25:33.076 { 00:25:33.076 "method": "sock_set_default_impl", 00:25:33.076 "params": { 00:25:33.076 "impl_name": "posix" 00:25:33.076 } 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "method": "sock_impl_set_options", 00:25:33.076 "params": { 00:25:33.076 "impl_name": "ssl", 00:25:33.076 "recv_buf_size": 4096, 00:25:33.076 "send_buf_size": 4096, 00:25:33.076 "enable_recv_pipe": true, 00:25:33.076 "enable_quickack": false, 00:25:33.076 "enable_placement_id": 0, 00:25:33.076 "enable_zerocopy_send_server": true, 00:25:33.076 "enable_zerocopy_send_client": false, 00:25:33.076 "zerocopy_threshold": 0, 00:25:33.076 "tls_version": 0, 00:25:33.076 "enable_ktls": false 00:25:33.076 } 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "method": "sock_impl_set_options", 00:25:33.076 "params": { 00:25:33.076 "impl_name": "posix", 00:25:33.076 "recv_buf_size": 2097152, 00:25:33.076 "send_buf_size": 2097152, 00:25:33.076 "enable_recv_pipe": true, 00:25:33.076 "enable_quickack": false, 00:25:33.076 "enable_placement_id": 0, 
00:25:33.076 "enable_zerocopy_send_server": true, 00:25:33.076 "enable_zerocopy_send_client": false, 00:25:33.076 "zerocopy_threshold": 0, 00:25:33.076 "tls_version": 0, 00:25:33.076 "enable_ktls": false 00:25:33.076 } 00:25:33.076 } 00:25:33.076 ] 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "subsystem": "vmd", 00:25:33.076 "config": [] 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "subsystem": "accel", 00:25:33.076 "config": [ 00:25:33.076 { 00:25:33.076 "method": "accel_set_options", 00:25:33.076 "params": { 00:25:33.076 "small_cache_size": 128, 00:25:33.076 "large_cache_size": 16, 00:25:33.076 "task_count": 2048, 00:25:33.076 "sequence_count": 2048, 00:25:33.076 "buf_count": 2048 00:25:33.076 } 00:25:33.076 } 00:25:33.076 ] 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "subsystem": "bdev", 00:25:33.076 "config": [ 00:25:33.076 { 00:25:33.076 "method": "bdev_set_options", 00:25:33.076 "params": { 00:25:33.076 "bdev_io_pool_size": 65535, 00:25:33.076 "bdev_io_cache_size": 256, 00:25:33.076 "bdev_auto_examine": true, 00:25:33.076 "iobuf_small_cache_size": 128, 00:25:33.076 "iobuf_large_cache_size": 16 00:25:33.076 } 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "method": "bdev_raid_set_options", 00:25:33.076 "params": { 00:25:33.076 "process_window_size_kb": 1024, 00:25:33.076 "process_max_bandwidth_mb_sec": 0 00:25:33.076 } 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "method": "bdev_iscsi_set_options", 00:25:33.076 "params": { 00:25:33.076 "timeout_sec": 30 00:25:33.076 } 00:25:33.076 }, 00:25:33.076 { 00:25:33.076 "method": "bdev_nvme_set_options", 00:25:33.076 "params": { 00:25:33.076 "action_on_timeout": "none", 00:25:33.076 "timeout_us": 0, 00:25:33.076 "timeout_admin_us": 0, 00:25:33.077 "keep_alive_timeout_ms": 10000, 00:25:33.077 "arbitration_burst": 0, 00:25:33.077 "low_priority_weight": 0, 00:25:33.077 "medium_priority_weight": 0, 00:25:33.077 "high_priority_weight": 0, 00:25:33.077 "nvme_adminq_poll_period_us": 10000, 00:25:33.077 "nvme_ioq_poll_period_us": 0, 
00:25:33.077 "io_queue_requests": 0, 00:25:33.077 "delay_cmd_submit": true, 00:25:33.077 "transport_retry_count": 4, 00:25:33.077 "bdev_retry_count": 3, 00:25:33.077 "transport_ack_timeout": 0, 00:25:33.077 "ctrlr_loss_timeout_sec": 0, 00:25:33.077 "reconnect_delay_sec": 0, 00:25:33.077 "fast_io_fail_timeout_sec": 0, 00:25:33.077 "disable_auto_failback": false, 00:25:33.077 "generate_uuids": false, 00:25:33.077 "transport_tos": 0, 00:25:33.077 "nvme_error_stat": false, 00:25:33.077 "rdma_srq_size": 0, 00:25:33.077 "io_path_stat": false, 00:25:33.077 "allow_accel_sequence": false, 00:25:33.077 "rdma_max_cq_size": 0, 00:25:33.077 "rdma_cm_event_timeout_ms": 0, 00:25:33.077 "dhchap_digests": [ 00:25:33.077 "sha256", 00:25:33.077 "sha384", 00:25:33.077 "sha512" 00:25:33.077 ], 00:25:33.077 "dhchap_dhgroups": [ 00:25:33.077 "null", 00:25:33.077 "ffdhe2048", 00:25:33.077 "ffdhe3072", 00:25:33.077 "ffdhe4096", 00:25:33.077 "ffdhe6144", 00:25:33.077 "ffdhe8192" 00:25:33.077 ] 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "bdev_nvme_set_hotplug", 00:25:33.077 "params": { 00:25:33.077 "period_us": 100000, 00:25:33.077 "enable": false 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "bdev_malloc_create", 00:25:33.077 "params": { 00:25:33.077 "name": "malloc0", 00:25:33.077 "num_blocks": 8192, 00:25:33.077 "block_size": 4096, 00:25:33.077 "physical_block_size": 4096, 00:25:33.077 "uuid": "735d8de3-1f81-46fd-9908-180acd68368e", 00:25:33.077 "optimal_io_boundary": 0, 00:25:33.077 "md_size": 0, 00:25:33.077 "dif_type": 0, 00:25:33.077 "dif_is_head_of_md": false, 00:25:33.077 "dif_pi_format": 0 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "bdev_wait_for_examine" 00:25:33.077 } 00:25:33.077 ] 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "subsystem": "nbd", 00:25:33.077 "config": [] 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "subsystem": "scheduler", 00:25:33.077 "config": [ 00:25:33.077 { 00:25:33.077 "method": 
"framework_set_scheduler", 00:25:33.077 "params": { 00:25:33.077 "name": "static" 00:25:33.077 } 00:25:33.077 } 00:25:33.077 ] 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "subsystem": "nvmf", 00:25:33.077 "config": [ 00:25:33.077 { 00:25:33.077 "method": "nvmf_set_config", 00:25:33.077 "params": { 00:25:33.077 "discovery_filter": "match_any", 00:25:33.077 "admin_cmd_passthru": { 00:25:33.077 "identify_ctrlr": false 00:25:33.077 }, 00:25:33.077 "dhchap_digests": [ 00:25:33.077 "sha256", 00:25:33.077 "sha384", 00:25:33.077 "sha512" 00:25:33.077 ], 00:25:33.077 "dhchap_dhgroups": [ 00:25:33.077 "null", 00:25:33.077 "ffdhe2048", 00:25:33.077 "ffdhe3072", 00:25:33.077 "ffdhe4096", 00:25:33.077 "ffdhe6144", 00:25:33.077 "ffdhe8192" 00:25:33.077 ] 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "nvmf_set_max_subsystems", 00:25:33.077 "params": { 00:25:33.077 "max_subsystems": 1024 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "nvmf_set_crdt", 00:25:33.077 "params": { 00:25:33.077 "crdt1": 0, 00:25:33.077 "crdt2": 0, 00:25:33.077 "crdt3": 0 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "nvmf_create_transport", 00:25:33.077 "params": { 00:25:33.077 "trtype": "TCP", 00:25:33.077 "max_queue_depth": 128, 00:25:33.077 "max_io_qpairs_per_ctrlr": 127, 00:25:33.077 "in_capsule_data_size": 4096, 00:25:33.077 "max_io_size": 131072, 00:25:33.077 "io_unit_size": 131072, 00:25:33.077 "max_aq_depth": 128, 00:25:33.077 "num_shared_buffers": 511, 00:25:33.077 "buf_cache_size": 4294967295, 00:25:33.077 "dif_insert_or_strip": false, 00:25:33.077 "zcopy": false, 00:25:33.077 "c2h_success": false, 00:25:33.077 "sock_priority": 0, 00:25:33.077 "abort_timeout_sec": 1, 00:25:33.077 "ack_timeout": 0, 00:25:33.077 "data_wr_pool_size": 0 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "nvmf_create_subsystem", 00:25:33.077 "params": { 00:25:33.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.077 
"allow_any_host": false, 00:25:33.077 "serial_number": "SPDK00000000000001", 00:25:33.077 "model_number": "SPDK bdev Controller", 00:25:33.077 "max_namespaces": 10, 00:25:33.077 "min_cntlid": 1, 00:25:33.077 "max_cntlid": 65519, 00:25:33.077 "ana_reporting": false 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "nvmf_subsystem_add_host", 00:25:33.077 "params": { 00:25:33.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.077 "host": "nqn.2016-06.io.spdk:host1", 00:25:33.077 "psk": "key0" 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "nvmf_subsystem_add_ns", 00:25:33.077 "params": { 00:25:33.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.077 "namespace": { 00:25:33.077 "nsid": 1, 00:25:33.077 "bdev_name": "malloc0", 00:25:33.077 "nguid": "735D8DE31F8146FD9908180ACD68368E", 00:25:33.077 "uuid": "735d8de3-1f81-46fd-9908-180acd68368e", 00:25:33.077 "no_auto_visible": false 00:25:33.077 } 00:25:33.077 } 00:25:33.077 }, 00:25:33.077 { 00:25:33.077 "method": "nvmf_subsystem_add_listener", 00:25:33.077 "params": { 00:25:33.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.077 "listen_address": { 00:25:33.077 "trtype": "TCP", 00:25:33.077 "adrfam": "IPv4", 00:25:33.077 "traddr": "10.0.0.2", 00:25:33.077 "trsvcid": "4420" 00:25:33.077 }, 00:25:33.077 "secure_channel": true 00:25:33.077 } 00:25:33.077 } 00:25:33.077 ] 00:25:33.077 } 00:25:33.077 ] 00:25:33.077 }' 00:25:33.077 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:33.642 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:33.642 "subsystems": [ 00:25:33.642 { 00:25:33.642 "subsystem": "keyring", 00:25:33.642 "config": [ 00:25:33.642 { 00:25:33.642 "method": "keyring_file_add_key", 00:25:33.642 "params": { 00:25:33.642 "name": "key0", 00:25:33.642 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:33.642 } 
00:25:33.642 } 00:25:33.642 ] 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "subsystem": "iobuf", 00:25:33.642 "config": [ 00:25:33.642 { 00:25:33.642 "method": "iobuf_set_options", 00:25:33.642 "params": { 00:25:33.642 "small_pool_count": 8192, 00:25:33.642 "large_pool_count": 1024, 00:25:33.642 "small_bufsize": 8192, 00:25:33.642 "large_bufsize": 135168, 00:25:33.642 "enable_numa": false 00:25:33.642 } 00:25:33.642 } 00:25:33.642 ] 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "subsystem": "sock", 00:25:33.642 "config": [ 00:25:33.642 { 00:25:33.642 "method": "sock_set_default_impl", 00:25:33.642 "params": { 00:25:33.642 "impl_name": "posix" 00:25:33.642 } 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "method": "sock_impl_set_options", 00:25:33.642 "params": { 00:25:33.642 "impl_name": "ssl", 00:25:33.642 "recv_buf_size": 4096, 00:25:33.642 "send_buf_size": 4096, 00:25:33.642 "enable_recv_pipe": true, 00:25:33.642 "enable_quickack": false, 00:25:33.642 "enable_placement_id": 0, 00:25:33.642 "enable_zerocopy_send_server": true, 00:25:33.642 "enable_zerocopy_send_client": false, 00:25:33.642 "zerocopy_threshold": 0, 00:25:33.642 "tls_version": 0, 00:25:33.642 "enable_ktls": false 00:25:33.642 } 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "method": "sock_impl_set_options", 00:25:33.642 "params": { 00:25:33.642 "impl_name": "posix", 00:25:33.642 "recv_buf_size": 2097152, 00:25:33.642 "send_buf_size": 2097152, 00:25:33.642 "enable_recv_pipe": true, 00:25:33.642 "enable_quickack": false, 00:25:33.642 "enable_placement_id": 0, 00:25:33.642 "enable_zerocopy_send_server": true, 00:25:33.642 "enable_zerocopy_send_client": false, 00:25:33.642 "zerocopy_threshold": 0, 00:25:33.642 "tls_version": 0, 00:25:33.642 "enable_ktls": false 00:25:33.642 } 00:25:33.642 } 00:25:33.642 ] 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "subsystem": "vmd", 00:25:33.642 "config": [] 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "subsystem": "accel", 00:25:33.642 "config": [ 00:25:33.642 { 00:25:33.642 
"method": "accel_set_options", 00:25:33.642 "params": { 00:25:33.642 "small_cache_size": 128, 00:25:33.642 "large_cache_size": 16, 00:25:33.642 "task_count": 2048, 00:25:33.642 "sequence_count": 2048, 00:25:33.642 "buf_count": 2048 00:25:33.642 } 00:25:33.642 } 00:25:33.642 ] 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "subsystem": "bdev", 00:25:33.642 "config": [ 00:25:33.642 { 00:25:33.642 "method": "bdev_set_options", 00:25:33.642 "params": { 00:25:33.642 "bdev_io_pool_size": 65535, 00:25:33.642 "bdev_io_cache_size": 256, 00:25:33.642 "bdev_auto_examine": true, 00:25:33.642 "iobuf_small_cache_size": 128, 00:25:33.642 "iobuf_large_cache_size": 16 00:25:33.642 } 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "method": "bdev_raid_set_options", 00:25:33.642 "params": { 00:25:33.642 "process_window_size_kb": 1024, 00:25:33.642 "process_max_bandwidth_mb_sec": 0 00:25:33.642 } 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "method": "bdev_iscsi_set_options", 00:25:33.642 "params": { 00:25:33.642 "timeout_sec": 30 00:25:33.642 } 00:25:33.642 }, 00:25:33.642 { 00:25:33.642 "method": "bdev_nvme_set_options", 00:25:33.642 "params": { 00:25:33.642 "action_on_timeout": "none", 00:25:33.642 "timeout_us": 0, 00:25:33.642 "timeout_admin_us": 0, 00:25:33.642 "keep_alive_timeout_ms": 10000, 00:25:33.642 "arbitration_burst": 0, 00:25:33.642 "low_priority_weight": 0, 00:25:33.642 "medium_priority_weight": 0, 00:25:33.642 "high_priority_weight": 0, 00:25:33.642 "nvme_adminq_poll_period_us": 10000, 00:25:33.642 "nvme_ioq_poll_period_us": 0, 00:25:33.642 "io_queue_requests": 512, 00:25:33.642 "delay_cmd_submit": true, 00:25:33.642 "transport_retry_count": 4, 00:25:33.642 "bdev_retry_count": 3, 00:25:33.642 "transport_ack_timeout": 0, 00:25:33.642 "ctrlr_loss_timeout_sec": 0, 00:25:33.642 "reconnect_delay_sec": 0, 00:25:33.642 "fast_io_fail_timeout_sec": 0, 00:25:33.642 "disable_auto_failback": false, 00:25:33.642 "generate_uuids": false, 00:25:33.642 "transport_tos": 0, 00:25:33.642 
"nvme_error_stat": false, 00:25:33.642 "rdma_srq_size": 0, 00:25:33.642 "io_path_stat": false, 00:25:33.642 "allow_accel_sequence": false, 00:25:33.642 "rdma_max_cq_size": 0, 00:25:33.642 "rdma_cm_event_timeout_ms": 0, 00:25:33.642 "dhchap_digests": [ 00:25:33.642 "sha256", 00:25:33.642 "sha384", 00:25:33.642 "sha512" 00:25:33.642 ], 00:25:33.642 "dhchap_dhgroups": [ 00:25:33.642 "null", 00:25:33.643 "ffdhe2048", 00:25:33.643 "ffdhe3072", 00:25:33.643 "ffdhe4096", 00:25:33.643 "ffdhe6144", 00:25:33.643 "ffdhe8192" 00:25:33.643 ] 00:25:33.643 } 00:25:33.643 }, 00:25:33.643 { 00:25:33.643 "method": "bdev_nvme_attach_controller", 00:25:33.643 "params": { 00:25:33.643 "name": "TLSTEST", 00:25:33.643 "trtype": "TCP", 00:25:33.643 "adrfam": "IPv4", 00:25:33.643 "traddr": "10.0.0.2", 00:25:33.643 "trsvcid": "4420", 00:25:33.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.643 "prchk_reftag": false, 00:25:33.643 "prchk_guard": false, 00:25:33.643 "ctrlr_loss_timeout_sec": 0, 00:25:33.643 "reconnect_delay_sec": 0, 00:25:33.643 "fast_io_fail_timeout_sec": 0, 00:25:33.643 "psk": "key0", 00:25:33.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:33.643 "hdgst": false, 00:25:33.643 "ddgst": false, 00:25:33.643 "multipath": "multipath" 00:25:33.643 } 00:25:33.643 }, 00:25:33.643 { 00:25:33.643 "method": "bdev_nvme_set_hotplug", 00:25:33.643 "params": { 00:25:33.643 "period_us": 100000, 00:25:33.643 "enable": false 00:25:33.643 } 00:25:33.643 }, 00:25:33.643 { 00:25:33.643 "method": "bdev_wait_for_examine" 00:25:33.643 } 00:25:33.643 ] 00:25:33.643 }, 00:25:33.643 { 00:25:33.643 "subsystem": "nbd", 00:25:33.643 "config": [] 00:25:33.643 } 00:25:33.643 ] 00:25:33.643 }' 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 690777 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 690777 ']' 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 690777 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 690777 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 690777' 00:25:33.643 killing process with pid 690777 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 690777 00:25:33.643 Received shutdown signal, test time was about 10.000000 seconds 00:25:33.643 00:25:33.643 Latency(us) 00:25:33.643 [2024-11-05T11:40:02.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.643 [2024-11-05T11:40:02.881Z] =================================================================================================================== 00:25:33.643 [2024-11-05T11:40:02.881Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 690777 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 690463 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 690463 ']' 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 690463 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 690463 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 690463' 00:25:33.643 killing process with pid 690463 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 690463 00:25:33.643 12:40:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 690463 00:25:33.901 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:33.901 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.901 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:33.901 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:33.901 "subsystems": [ 00:25:33.901 { 00:25:33.901 "subsystem": "keyring", 00:25:33.901 "config": [ 00:25:33.901 { 00:25:33.901 "method": "keyring_file_add_key", 00:25:33.901 "params": { 00:25:33.901 "name": "key0", 00:25:33.901 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:33.901 } 00:25:33.901 } 00:25:33.901 ] 00:25:33.901 }, 00:25:33.901 { 00:25:33.901 "subsystem": "iobuf", 00:25:33.901 "config": [ 00:25:33.901 { 00:25:33.901 "method": "iobuf_set_options", 00:25:33.901 "params": { 00:25:33.901 "small_pool_count": 8192, 00:25:33.901 "large_pool_count": 1024, 00:25:33.901 "small_bufsize": 8192, 00:25:33.901 "large_bufsize": 135168, 00:25:33.901 "enable_numa": false 00:25:33.901 } 00:25:33.901 } 00:25:33.901 ] 00:25:33.901 }, 00:25:33.901 
{ 00:25:33.901 "subsystem": "sock", 00:25:33.901 "config": [ 00:25:33.901 { 00:25:33.901 "method": "sock_set_default_impl", 00:25:33.901 "params": { 00:25:33.901 "impl_name": "posix" 00:25:33.901 } 00:25:33.901 }, 00:25:33.901 { 00:25:33.901 "method": "sock_impl_set_options", 00:25:33.901 "params": { 00:25:33.901 "impl_name": "ssl", 00:25:33.901 "recv_buf_size": 4096, 00:25:33.901 "send_buf_size": 4096, 00:25:33.901 "enable_recv_pipe": true, 00:25:33.901 "enable_quickack": false, 00:25:33.901 "enable_placement_id": 0, 00:25:33.901 "enable_zerocopy_send_server": true, 00:25:33.901 "enable_zerocopy_send_client": false, 00:25:33.901 "zerocopy_threshold": 0, 00:25:33.901 "tls_version": 0, 00:25:33.901 "enable_ktls": false 00:25:33.901 } 00:25:33.901 }, 00:25:33.901 { 00:25:33.901 "method": "sock_impl_set_options", 00:25:33.901 "params": { 00:25:33.901 "impl_name": "posix", 00:25:33.901 "recv_buf_size": 2097152, 00:25:33.901 "send_buf_size": 2097152, 00:25:33.901 "enable_recv_pipe": true, 00:25:33.901 "enable_quickack": false, 00:25:33.901 "enable_placement_id": 0, 00:25:33.901 "enable_zerocopy_send_server": true, 00:25:33.901 "enable_zerocopy_send_client": false, 00:25:33.901 "zerocopy_threshold": 0, 00:25:33.901 "tls_version": 0, 00:25:33.901 "enable_ktls": false 00:25:33.901 } 00:25:33.901 } 00:25:33.901 ] 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "subsystem": "vmd", 00:25:33.902 "config": [] 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "subsystem": "accel", 00:25:33.902 "config": [ 00:25:33.902 { 00:25:33.902 "method": "accel_set_options", 00:25:33.902 "params": { 00:25:33.902 "small_cache_size": 128, 00:25:33.902 "large_cache_size": 16, 00:25:33.902 "task_count": 2048, 00:25:33.902 "sequence_count": 2048, 00:25:33.902 "buf_count": 2048 00:25:33.902 } 00:25:33.902 } 00:25:33.902 ] 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "subsystem": "bdev", 00:25:33.902 "config": [ 00:25:33.902 { 00:25:33.902 "method": "bdev_set_options", 00:25:33.902 "params": { 00:25:33.902 
"bdev_io_pool_size": 65535, 00:25:33.902 "bdev_io_cache_size": 256, 00:25:33.902 "bdev_auto_examine": true, 00:25:33.902 "iobuf_small_cache_size": 128, 00:25:33.902 "iobuf_large_cache_size": 16 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "bdev_raid_set_options", 00:25:33.902 "params": { 00:25:33.902 "process_window_size_kb": 1024, 00:25:33.902 "process_max_bandwidth_mb_sec": 0 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "bdev_iscsi_set_options", 00:25:33.902 "params": { 00:25:33.902 "timeout_sec": 30 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "bdev_nvme_set_options", 00:25:33.902 "params": { 00:25:33.902 "action_on_timeout": "none", 00:25:33.902 "timeout_us": 0, 00:25:33.902 "timeout_admin_us": 0, 00:25:33.902 "keep_alive_timeout_ms": 10000, 00:25:33.902 "arbitration_burst": 0, 00:25:33.902 "low_priority_weight": 0, 00:25:33.902 "medium_priority_weight": 0, 00:25:33.902 "high_priority_weight": 0, 00:25:33.902 "nvme_adminq_poll_period_us": 10000, 00:25:33.902 "nvme_ioq_poll_period_us": 0, 00:25:33.902 "io_queue_requests": 0, 00:25:33.902 "delay_cmd_submit": true, 00:25:33.902 "transport_retry_count": 4, 00:25:33.902 "bdev_retry_count": 3, 00:25:33.902 "transport_ack_timeout": 0, 00:25:33.902 "ctrlr_loss_timeout_sec": 0, 00:25:33.902 "reconnect_delay_sec": 0, 00:25:33.902 "fast_io_fail_timeout_sec": 0, 00:25:33.902 "disable_auto_failback": false, 00:25:33.902 "generate_uuids": false, 00:25:33.902 "transport_tos": 0, 00:25:33.902 "nvme_error_stat": false, 00:25:33.902 "rdma_srq_size": 0, 00:25:33.902 "io_path_stat": false, 00:25:33.902 "allow_accel_sequence": false, 00:25:33.902 "rdma_max_cq_size": 0, 00:25:33.902 "rdma_cm_event_timeout_ms": 0, 00:25:33.902 "dhchap_digests": [ 00:25:33.902 "sha256", 00:25:33.902 "sha384", 00:25:33.902 "sha512" 00:25:33.902 ], 00:25:33.902 "dhchap_dhgroups": [ 00:25:33.902 "null", 00:25:33.902 "ffdhe2048", 00:25:33.902 "ffdhe3072", 00:25:33.902 "ffdhe4096", 
00:25:33.902 "ffdhe6144", 00:25:33.902 "ffdhe8192" 00:25:33.902 ] 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "bdev_nvme_set_hotplug", 00:25:33.902 "params": { 00:25:33.902 "period_us": 100000, 00:25:33.902 "enable": false 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "bdev_malloc_create", 00:25:33.902 "params": { 00:25:33.902 "name": "malloc0", 00:25:33.902 "num_blocks": 8192, 00:25:33.902 "block_size": 4096, 00:25:33.902 "physical_block_size": 4096, 00:25:33.902 "uuid": "735d8de3-1f81-46fd-9908-180acd68368e", 00:25:33.902 "optimal_io_boundary": 0, 00:25:33.902 "md_size": 0, 00:25:33.902 "dif_type": 0, 00:25:33.902 "dif_is_head_of_md": false, 00:25:33.902 "dif_pi_format": 0 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "bdev_wait_for_examine" 00:25:33.902 } 00:25:33.902 ] 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "subsystem": "nbd", 00:25:33.902 "config": [] 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "subsystem": "scheduler", 00:25:33.902 "config": [ 00:25:33.902 { 00:25:33.902 "method": "framework_set_scheduler", 00:25:33.902 "params": { 00:25:33.902 "name": "static" 00:25:33.902 } 00:25:33.902 } 00:25:33.902 ] 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "subsystem": "nvmf", 00:25:33.902 "config": [ 00:25:33.902 { 00:25:33.902 "method": "nvmf_set_config", 00:25:33.902 "params": { 00:25:33.902 "discovery_filter": "match_any", 00:25:33.902 "admin_cmd_passthru": { 00:25:33.902 "identify_ctrlr": false 00:25:33.902 }, 00:25:33.902 "dhchap_digests": [ 00:25:33.902 "sha256", 00:25:33.902 "sha384", 00:25:33.902 "sha512" 00:25:33.902 ], 00:25:33.902 "dhchap_dhgroups": [ 00:25:33.902 "null", 00:25:33.902 "ffdhe2048", 00:25:33.902 "ffdhe3072", 00:25:33.902 "ffdhe4096", 00:25:33.902 "ffdhe6144", 00:25:33.902 "ffdhe8192" 00:25:33.902 ] 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "nvmf_set_max_subsystems", 00:25:33.902 "params": { 00:25:33.902 "max_subsystems": 1024 00:25:33.902 
} 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "nvmf_set_crdt", 00:25:33.902 "params": { 00:25:33.902 "crdt1": 0, 00:25:33.902 "crdt2": 0, 00:25:33.902 "crdt3": 0 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "nvmf_create_transport", 00:25:33.902 "params": { 00:25:33.902 "trtype": "TCP", 00:25:33.902 "max_queue_depth": 128, 00:25:33.902 "max_io_qpairs_per_ctrlr": 127, 00:25:33.902 "in_capsule_data_size": 4096, 00:25:33.902 "max_io_size": 131072, 00:25:33.902 "io_unit_size": 131072, 00:25:33.902 "max_aq_depth": 128, 00:25:33.902 "num_shared_buffers": 511, 00:25:33.902 "buf_cache_size": 4294967295, 00:25:33.902 "dif_insert_or_strip": false, 00:25:33.902 "zcopy": false, 00:25:33.902 "c2h_success": false, 00:25:33.902 "sock_priority": 0, 00:25:33.902 "abort_timeout_sec": 1, 00:25:33.902 "ack_timeout": 0, 00:25:33.902 "data_wr_pool_size": 0 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "nvmf_create_subsystem", 00:25:33.902 "params": { 00:25:33.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.902 "allow_any_host": false, 00:25:33.902 "serial_number": "SPDK00000000000001", 00:25:33.902 "model_number": "SPDK bdev Controller", 00:25:33.902 "max_namespaces": 10, 00:25:33.902 "min_cntlid": 1, 00:25:33.902 "max_cntlid": 65519, 00:25:33.902 "ana_reporting": false 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "nvmf_subsystem_add_host", 00:25:33.902 "params": { 00:25:33.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.902 "host": "nqn.2016-06.io.spdk:host1", 00:25:33.902 "psk": "key0" 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "nvmf_subsystem_add_ns", 00:25:33.902 "params": { 00:25:33.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.902 "namespace": { 00:25:33.902 "nsid": 1, 00:25:33.902 "bdev_name": "malloc0", 00:25:33.902 "nguid": "735D8DE31F8146FD9908180ACD68368E", 00:25:33.902 "uuid": "735d8de3-1f81-46fd-9908-180acd68368e", 00:25:33.902 "no_auto_visible": false 
00:25:33.902 } 00:25:33.902 } 00:25:33.902 }, 00:25:33.902 { 00:25:33.902 "method": "nvmf_subsystem_add_listener", 00:25:33.902 "params": { 00:25:33.902 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.902 "listen_address": { 00:25:33.902 "trtype": "TCP", 00:25:33.902 "adrfam": "IPv4", 00:25:33.902 "traddr": "10.0.0.2", 00:25:33.902 "trsvcid": "4420" 00:25:33.902 }, 00:25:33.902 "secure_channel": true 00:25:33.902 } 00:25:33.903 } 00:25:33.903 ] 00:25:33.903 } 00:25:33.903 ] 00:25:33.903 }' 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=691033 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 691033 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 691033 ']' 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:33.903 12:40:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:33.903 [2024-11-05 12:40:03.127246] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:33.903 [2024-11-05 12:40:03.127324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.160 [2024-11-05 12:40:03.200345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.160 [2024-11-05 12:40:03.245000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.160 [2024-11-05 12:40:03.245063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.160 [2024-11-05 12:40:03.245077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.160 [2024-11-05 12:40:03.245089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.160 [2024-11-05 12:40:03.245099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:34.160 [2024-11-05 12:40:03.245714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.417 [2024-11-05 12:40:03.489088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.417 [2024-11-05 12:40:03.521112] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:34.417 [2024-11-05 12:40:03.521388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=691181 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 691181 /var/tmp/bdevperf.sock 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 691181 ']' 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c 
/dev/fd/63 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:34.982 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:34.982 "subsystems": [ 00:25:34.982 { 00:25:34.982 "subsystem": "keyring", 00:25:34.982 "config": [ 00:25:34.982 { 00:25:34.982 "method": "keyring_file_add_key", 00:25:34.982 "params": { 00:25:34.982 "name": "key0", 00:25:34.982 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:34.982 } 00:25:34.982 } 00:25:34.982 ] 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "subsystem": "iobuf", 00:25:34.982 "config": [ 00:25:34.982 { 00:25:34.982 "method": "iobuf_set_options", 00:25:34.982 "params": { 00:25:34.982 "small_pool_count": 8192, 00:25:34.982 "large_pool_count": 1024, 00:25:34.982 "small_bufsize": 8192, 00:25:34.982 "large_bufsize": 135168, 00:25:34.982 "enable_numa": false 00:25:34.982 } 00:25:34.982 } 00:25:34.982 ] 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "subsystem": "sock", 00:25:34.982 "config": [ 00:25:34.982 { 00:25:34.982 "method": "sock_set_default_impl", 00:25:34.982 "params": { 00:25:34.982 "impl_name": "posix" 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "method": "sock_impl_set_options", 00:25:34.982 "params": { 00:25:34.982 "impl_name": "ssl", 00:25:34.982 "recv_buf_size": 4096, 00:25:34.982 "send_buf_size": 4096, 00:25:34.982 "enable_recv_pipe": true, 00:25:34.982 "enable_quickack": false, 00:25:34.982 "enable_placement_id": 0, 00:25:34.982 "enable_zerocopy_send_server": true, 00:25:34.982 "enable_zerocopy_send_client": false, 00:25:34.982 "zerocopy_threshold": 0, 00:25:34.982 "tls_version": 0, 00:25:34.982 "enable_ktls": false 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "method": "sock_impl_set_options", 00:25:34.982 "params": { 
00:25:34.982 "impl_name": "posix", 00:25:34.982 "recv_buf_size": 2097152, 00:25:34.982 "send_buf_size": 2097152, 00:25:34.982 "enable_recv_pipe": true, 00:25:34.982 "enable_quickack": false, 00:25:34.982 "enable_placement_id": 0, 00:25:34.982 "enable_zerocopy_send_server": true, 00:25:34.982 "enable_zerocopy_send_client": false, 00:25:34.982 "zerocopy_threshold": 0, 00:25:34.982 "tls_version": 0, 00:25:34.982 "enable_ktls": false 00:25:34.982 } 00:25:34.982 } 00:25:34.982 ] 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "subsystem": "vmd", 00:25:34.982 "config": [] 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "subsystem": "accel", 00:25:34.982 "config": [ 00:25:34.982 { 00:25:34.982 "method": "accel_set_options", 00:25:34.982 "params": { 00:25:34.982 "small_cache_size": 128, 00:25:34.982 "large_cache_size": 16, 00:25:34.982 "task_count": 2048, 00:25:34.982 "sequence_count": 2048, 00:25:34.982 "buf_count": 2048 00:25:34.982 } 00:25:34.982 } 00:25:34.982 ] 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "subsystem": "bdev", 00:25:34.982 "config": [ 00:25:34.982 { 00:25:34.982 "method": "bdev_set_options", 00:25:34.982 "params": { 00:25:34.982 "bdev_io_pool_size": 65535, 00:25:34.982 "bdev_io_cache_size": 256, 00:25:34.982 "bdev_auto_examine": true, 00:25:34.982 "iobuf_small_cache_size": 128, 00:25:34.982 "iobuf_large_cache_size": 16 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "method": "bdev_raid_set_options", 00:25:34.982 "params": { 00:25:34.982 "process_window_size_kb": 1024, 00:25:34.982 "process_max_bandwidth_mb_sec": 0 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "method": "bdev_iscsi_set_options", 00:25:34.982 "params": { 00:25:34.982 "timeout_sec": 30 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "method": "bdev_nvme_set_options", 00:25:34.982 "params": { 00:25:34.982 "action_on_timeout": "none", 00:25:34.982 "timeout_us": 0, 00:25:34.982 "timeout_admin_us": 0, 00:25:34.982 "keep_alive_timeout_ms": 10000, 00:25:34.982 
"arbitration_burst": 0, 00:25:34.982 "low_priority_weight": 0, 00:25:34.982 "medium_priority_weight": 0, 00:25:34.982 "high_priority_weight": 0, 00:25:34.982 "nvme_adminq_poll_period_us": 10000, 00:25:34.982 "nvme_ioq_poll_period_us": 0, 00:25:34.982 "io_queue_requests": 512, 00:25:34.982 "delay_cmd_submit": true, 00:25:34.982 "transport_retry_count": 4, 00:25:34.982 "bdev_retry_count": 3, 00:25:34.982 "transport_ack_timeout": 0, 00:25:34.982 "ctrlr_loss_timeout_sec": 0, 00:25:34.982 "reconnect_delay_sec": 0, 00:25:34.982 "fast_io_fail_timeout_sec": 0, 00:25:34.982 "disable_auto_failback": false, 00:25:34.982 "generate_uuids": false, 00:25:34.982 "transport_tos": 0, 00:25:34.982 "nvme_error_stat": false, 00:25:34.982 "rdma_srq_size": 0, 00:25:34.982 "io_path_stat": false, 00:25:34.982 "allow_accel_sequence": false, 00:25:34.982 "rdma_max_cq_size": 0, 00:25:34.982 "rdma_cm_event_timeout_ms": 0, 00:25:34.982 "dhchap_digests": [ 00:25:34.982 "sha256", 00:25:34.982 "sha384", 00:25:34.982 "sha512" 00:25:34.982 ], 00:25:34.982 "dhchap_dhgroups": [ 00:25:34.982 "null", 00:25:34.982 "ffdhe2048", 00:25:34.982 "ffdhe3072", 00:25:34.982 "ffdhe4096", 00:25:34.982 "ffdhe6144", 00:25:34.982 "ffdhe8192" 00:25:34.982 ] 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "method": "bdev_nvme_attach_controller", 00:25:34.982 "params": { 00:25:34.982 "name": "TLSTEST", 00:25:34.982 "trtype": "TCP", 00:25:34.982 "adrfam": "IPv4", 00:25:34.982 "traddr": "10.0.0.2", 00:25:34.982 "trsvcid": "4420", 00:25:34.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.982 "prchk_reftag": false, 00:25:34.982 "prchk_guard": false, 00:25:34.982 "ctrlr_loss_timeout_sec": 0, 00:25:34.982 "reconnect_delay_sec": 0, 00:25:34.982 "fast_io_fail_timeout_sec": 0, 00:25:34.982 "psk": "key0", 00:25:34.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:34.982 "hdgst": false, 00:25:34.982 "ddgst": false, 00:25:34.982 "multipath": "multipath" 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 
"method": "bdev_nvme_set_hotplug", 00:25:34.982 "params": { 00:25:34.982 "period_us": 100000, 00:25:34.982 "enable": false 00:25:34.982 } 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "method": "bdev_wait_for_examine" 00:25:34.982 } 00:25:34.982 ] 00:25:34.982 }, 00:25:34.982 { 00:25:34.982 "subsystem": "nbd", 00:25:34.982 "config": [] 00:25:34.982 } 00:25:34.982 ] 00:25:34.982 }' 00:25:34.983 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:34.983 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.240 [2024-11-05 12:40:04.253717] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:35.240 [2024-11-05 12:40:04.253811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691181 ] 00:25:35.240 [2024-11-05 12:40:04.323014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.240 [2024-11-05 12:40:04.369785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.498 [2024-11-05 12:40:04.544634] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:35.498 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:35.498 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:35.498 12:40:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:35.756 Running I/O for 10 seconds... 
00:25:37.618 3436.00 IOPS, 13.42 MiB/s [2024-11-05T11:40:07.788Z] 3505.50 IOPS, 13.69 MiB/s [2024-11-05T11:40:09.158Z] 3517.00 IOPS, 13.74 MiB/s [2024-11-05T11:40:10.089Z] 3516.75 IOPS, 13.74 MiB/s [2024-11-05T11:40:11.098Z] 3524.80 IOPS, 13.77 MiB/s [2024-11-05T11:40:12.030Z] 3536.67 IOPS, 13.82 MiB/s [2024-11-05T11:40:12.961Z] 3535.00 IOPS, 13.81 MiB/s [2024-11-05T11:40:13.893Z] 3543.62 IOPS, 13.84 MiB/s [2024-11-05T11:40:14.825Z] 3537.89 IOPS, 13.82 MiB/s [2024-11-05T11:40:15.083Z] 3540.00 IOPS, 13.83 MiB/s 00:25:45.845 Latency(us) 00:25:45.845 [2024-11-05T11:40:15.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.845 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:45.846 Verification LBA range: start 0x0 length 0x2000 00:25:45.846 TLSTESTn1 : 10.04 3538.85 13.82 0.00 0.00 36082.54 6019.60 40195.41 00:25:45.846 [2024-11-05T11:40:15.084Z] =================================================================================================================== 00:25:45.846 [2024-11-05T11:40:15.084Z] Total : 3538.85 13.82 0.00 0.00 36082.54 6019.60 40195.41 00:25:45.846 { 00:25:45.846 "results": [ 00:25:45.846 { 00:25:45.846 "job": "TLSTESTn1", 00:25:45.846 "core_mask": "0x4", 00:25:45.846 "workload": "verify", 00:25:45.846 "status": "finished", 00:25:45.846 "verify_range": { 00:25:45.846 "start": 0, 00:25:45.846 "length": 8192 00:25:45.846 }, 00:25:45.846 "queue_depth": 128, 00:25:45.846 "io_size": 4096, 00:25:45.846 "runtime": 10.039127, 00:25:45.846 "iops": 3538.85352780177, 00:25:45.846 "mibps": 13.823646592975663, 00:25:45.846 "io_failed": 0, 00:25:45.846 "io_timeout": 0, 00:25:45.846 "avg_latency_us": 36082.537059575974, 00:25:45.846 "min_latency_us": 6019.602962962963, 00:25:45.846 "max_latency_us": 40195.41333333333 00:25:45.846 } 00:25:45.846 ], 00:25:45.846 "core_count": 1 00:25:45.846 } 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 691181 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 691181 ']' 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 691181 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 691181 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 691181' 00:25:45.846 killing process with pid 691181 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 691181 00:25:45.846 Received shutdown signal, test time was about 10.000000 seconds 00:25:45.846 00:25:45.846 Latency(us) 00:25:45.846 [2024-11-05T11:40:15.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.846 [2024-11-05T11:40:15.084Z] =================================================================================================================== 00:25:45.846 [2024-11-05T11:40:15.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.846 12:40:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 691181 00:25:45.846 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 691033 00:25:45.846 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 
-- # '[' -z 691033 ']' 00:25:45.846 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 691033 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 691033 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 691033' 00:25:46.106 killing process with pid 691033 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 691033 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 691033 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=692504 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 692504 00:25:46.106 12:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 692504 ']' 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.106 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.366 [2024-11-05 12:40:15.392599] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:46.366 [2024-11-05 12:40:15.392696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.366 [2024-11-05 12:40:15.463458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.366 [2024-11-05 12:40:15.503469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:46.366 [2024-11-05 12:40:15.503534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.366 [2024-11-05 12:40:15.503557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.366 [2024-11-05 12:40:15.503567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:46.366 [2024-11-05 12:40:15.503576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.366 [2024-11-05 12:40:15.504124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.o9qx4O8mZV 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.o9qx4O8mZV 00:25:46.623 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:46.881 [2024-11-05 12:40:15.922397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.881 12:40:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:47.138 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:47.396 [2024-11-05 12:40:16.455834] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:25:47.396 [2024-11-05 12:40:16.456141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.396 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:47.653 malloc0 00:25:47.653 12:40:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:47.911 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:48.168 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=692794 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 692794 /var/tmp/bdevperf.sock 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 692794 ']' 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:48.425 12:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:48.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:48.425 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:48.683 [2024-11-05 12:40:17.672874] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:48.683 [2024-11-05 12:40:17.672965] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692794 ] 00:25:48.683 [2024-11-05 12:40:17.740586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.683 [2024-11-05 12:40:17.786643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.683 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:48.683 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:48.683 12:40:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:48.940 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:49.197 [2024-11-05 12:40:18.415079] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:25:49.454 nvme0n1 00:25:49.454 12:40:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:49.454 Running I/O for 1 seconds... 00:25:50.643 3405.00 IOPS, 13.30 MiB/s 00:25:50.643 Latency(us) 00:25:50.643 [2024-11-05T11:40:19.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.643 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:50.643 Verification LBA range: start 0x0 length 0x2000 00:25:50.643 nvme0n1 : 1.03 3415.64 13.34 0.00 0.00 36942.73 6043.88 34175.81 00:25:50.643 [2024-11-05T11:40:19.881Z] =================================================================================================================== 00:25:50.643 [2024-11-05T11:40:19.881Z] Total : 3415.64 13.34 0.00 0.00 36942.73 6043.88 34175.81 00:25:50.643 { 00:25:50.643 "results": [ 00:25:50.643 { 00:25:50.643 "job": "nvme0n1", 00:25:50.643 "core_mask": "0x2", 00:25:50.643 "workload": "verify", 00:25:50.643 "status": "finished", 00:25:50.643 "verify_range": { 00:25:50.643 "start": 0, 00:25:50.643 "length": 8192 00:25:50.643 }, 00:25:50.643 "queue_depth": 128, 00:25:50.643 "io_size": 4096, 00:25:50.643 "runtime": 1.034651, 00:25:50.643 "iops": 3415.6445023491015, 00:25:50.643 "mibps": 13.342361337301178, 00:25:50.643 "io_failed": 0, 00:25:50.643 "io_timeout": 0, 00:25:50.643 "avg_latency_us": 36942.727954893206, 00:25:50.643 "min_latency_us": 6043.875555555555, 00:25:50.643 "max_latency_us": 34175.81037037037 00:25:50.643 } 00:25:50.643 ], 00:25:50.643 "core_count": 1 00:25:50.643 } 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 692794 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 692794 ']' 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 692794 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 692794 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 692794' 00:25:50.643 killing process with pid 692794 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 692794 00:25:50.643 Received shutdown signal, test time was about 1.000000 seconds 00:25:50.643 00:25:50.643 Latency(us) 00:25:50.643 [2024-11-05T11:40:19.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.643 [2024-11-05T11:40:19.881Z] =================================================================================================================== 00:25:50.643 [2024-11-05T11:40:19.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 692794 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 692504 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 692504 ']' 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 692504 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:50.643 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 692504 00:25:50.900 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:50.900 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:50.900 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 692504' 00:25:50.900 killing process with pid 692504 00:25:50.900 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 692504 00:25:50.900 12:40:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 692504 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=693069 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 693069 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 693069 ']' 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:50.900 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:50.900 [2024-11-05 12:40:20.136832] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:50.900 [2024-11-05 12:40:20.136958] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.158 [2024-11-05 12:40:20.209651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.158 [2024-11-05 12:40:20.252868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.158 [2024-11-05 12:40:20.252933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.158 [2024-11-05 12:40:20.252956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.158 [2024-11-05 12:40:20.252967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.158 [2024-11-05 12:40:20.252976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:51.158 [2024-11-05 12:40:20.253523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.158 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.158 [2024-11-05 12:40:20.396811] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.416 malloc0 00:25:51.416 [2024-11-05 12:40:20.427938] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:51.416 [2024-11-05 12:40:20.428198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=693215 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 693215 /var/tmp/bdevperf.sock 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 693215 ']' 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:51.416 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:51.416 [2024-11-05 12:40:20.500026] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:51.416 [2024-11-05 12:40:20.500105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693215 ] 00:25:51.416 [2024-11-05 12:40:20.566733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.416 [2024-11-05 12:40:20.612213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.674 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:51.674 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:51.674 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o9qx4O8mZV 00:25:51.931 12:40:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:52.189 [2024-11-05 12:40:21.261503] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:52.189 nvme0n1 00:25:52.189 12:40:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:52.447 Running I/O for 1 seconds... 
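Both bdevperf runs in this log drive a constant queue depth of 128 (`-q 128`), so the reported average latency and IOPS are coupled by Little's law: concurrency ≈ IOPS × latency. A rough cross-check against the figures in the log (a standalone sketch, not part of the test scripts; the percent-level slack is expected because ramp-up and drain time at the edges of the measured interval are included in the runtime):

```python
# Cross-check bdevperf's reported average latency against Little's law:
# at a steady queue depth QD, avg_latency ~= QD / IOPS.
QUEUE_DEPTH = 128  # bdevperf was started with "-q 128"

def expected_latency_us(iops: float, qd: int = QUEUE_DEPTH) -> float:
    """Average latency in microseconds implied by Little's law."""
    return qd / iops * 1_000_000

# Figures from the 1-second nvme0n1 run reported below.
reported_iops = 3561.67
reported_latency_us = 35569.11

implied = expected_latency_us(reported_iops)
# The implied and reported latencies agree to within a couple of percent.
assert abs(implied - reported_latency_us) / reported_latency_us < 0.05
```

The same check holds for the 10-second TLSTESTn1 run earlier in the log (3538.85 IOPS against 36082.54 µs average latency), which is a quick way to confirm a result table is internally consistent before comparing runs.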
00:25:53.380 3513.00 IOPS, 13.72 MiB/s 00:25:53.380 Latency(us) 00:25:53.380 [2024-11-05T11:40:22.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.380 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:53.380 Verification LBA range: start 0x0 length 0x2000 00:25:53.380 nvme0n1 : 1.02 3561.67 13.91 0.00 0.00 35569.11 7524.50 28932.93 00:25:53.380 [2024-11-05T11:40:22.618Z] =================================================================================================================== 00:25:53.380 [2024-11-05T11:40:22.618Z] Total : 3561.67 13.91 0.00 0.00 35569.11 7524.50 28932.93 00:25:53.380 { 00:25:53.380 "results": [ 00:25:53.380 { 00:25:53.380 "job": "nvme0n1", 00:25:53.380 "core_mask": "0x2", 00:25:53.380 "workload": "verify", 00:25:53.380 "status": "finished", 00:25:53.380 "verify_range": { 00:25:53.380 "start": 0, 00:25:53.380 "length": 8192 00:25:53.380 }, 00:25:53.380 "queue_depth": 128, 00:25:53.380 "io_size": 4096, 00:25:53.380 "runtime": 1.022274, 00:25:53.380 "iops": 3561.667419889384, 00:25:53.380 "mibps": 13.912763358942906, 00:25:53.380 "io_failed": 0, 00:25:53.380 "io_timeout": 0, 00:25:53.380 "avg_latency_us": 35569.114516768896, 00:25:53.380 "min_latency_us": 7524.503703703704, 00:25:53.380 "max_latency_us": 28932.93037037037 00:25:53.380 } 00:25:53.380 ], 00:25:53.380 "core_count": 1 00:25:53.380 } 00:25:53.380 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:53.380 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.380 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:53.380 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.380 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:53.380 "subsystems": [ 00:25:53.380 { 00:25:53.380 "subsystem": 
"keyring", 00:25:53.380 "config": [ 00:25:53.380 { 00:25:53.380 "method": "keyring_file_add_key", 00:25:53.380 "params": { 00:25:53.380 "name": "key0", 00:25:53.380 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:53.380 } 00:25:53.380 } 00:25:53.380 ] 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "subsystem": "iobuf", 00:25:53.380 "config": [ 00:25:53.380 { 00:25:53.380 "method": "iobuf_set_options", 00:25:53.380 "params": { 00:25:53.380 "small_pool_count": 8192, 00:25:53.380 "large_pool_count": 1024, 00:25:53.380 "small_bufsize": 8192, 00:25:53.380 "large_bufsize": 135168, 00:25:53.380 "enable_numa": false 00:25:53.380 } 00:25:53.380 } 00:25:53.380 ] 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "subsystem": "sock", 00:25:53.380 "config": [ 00:25:53.380 { 00:25:53.380 "method": "sock_set_default_impl", 00:25:53.380 "params": { 00:25:53.380 "impl_name": "posix" 00:25:53.380 } 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "method": "sock_impl_set_options", 00:25:53.380 "params": { 00:25:53.380 "impl_name": "ssl", 00:25:53.380 "recv_buf_size": 4096, 00:25:53.380 "send_buf_size": 4096, 00:25:53.380 "enable_recv_pipe": true, 00:25:53.380 "enable_quickack": false, 00:25:53.380 "enable_placement_id": 0, 00:25:53.380 "enable_zerocopy_send_server": true, 00:25:53.380 "enable_zerocopy_send_client": false, 00:25:53.380 "zerocopy_threshold": 0, 00:25:53.380 "tls_version": 0, 00:25:53.380 "enable_ktls": false 00:25:53.380 } 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "method": "sock_impl_set_options", 00:25:53.380 "params": { 00:25:53.380 "impl_name": "posix", 00:25:53.380 "recv_buf_size": 2097152, 00:25:53.380 "send_buf_size": 2097152, 00:25:53.380 "enable_recv_pipe": true, 00:25:53.380 "enable_quickack": false, 00:25:53.380 "enable_placement_id": 0, 00:25:53.380 "enable_zerocopy_send_server": true, 00:25:53.380 "enable_zerocopy_send_client": false, 00:25:53.380 "zerocopy_threshold": 0, 00:25:53.380 "tls_version": 0, 00:25:53.380 "enable_ktls": false 00:25:53.380 } 00:25:53.380 } 00:25:53.380 
] 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "subsystem": "vmd", 00:25:53.380 "config": [] 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "subsystem": "accel", 00:25:53.380 "config": [ 00:25:53.380 { 00:25:53.380 "method": "accel_set_options", 00:25:53.380 "params": { 00:25:53.380 "small_cache_size": 128, 00:25:53.380 "large_cache_size": 16, 00:25:53.380 "task_count": 2048, 00:25:53.380 "sequence_count": 2048, 00:25:53.380 "buf_count": 2048 00:25:53.380 } 00:25:53.380 } 00:25:53.380 ] 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "subsystem": "bdev", 00:25:53.380 "config": [ 00:25:53.380 { 00:25:53.380 "method": "bdev_set_options", 00:25:53.380 "params": { 00:25:53.380 "bdev_io_pool_size": 65535, 00:25:53.380 "bdev_io_cache_size": 256, 00:25:53.380 "bdev_auto_examine": true, 00:25:53.380 "iobuf_small_cache_size": 128, 00:25:53.380 "iobuf_large_cache_size": 16 00:25:53.380 } 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "method": "bdev_raid_set_options", 00:25:53.380 "params": { 00:25:53.380 "process_window_size_kb": 1024, 00:25:53.380 "process_max_bandwidth_mb_sec": 0 00:25:53.380 } 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "method": "bdev_iscsi_set_options", 00:25:53.380 "params": { 00:25:53.380 "timeout_sec": 30 00:25:53.380 } 00:25:53.380 }, 00:25:53.380 { 00:25:53.380 "method": "bdev_nvme_set_options", 00:25:53.380 "params": { 00:25:53.380 "action_on_timeout": "none", 00:25:53.380 "timeout_us": 0, 00:25:53.380 "timeout_admin_us": 0, 00:25:53.380 "keep_alive_timeout_ms": 10000, 00:25:53.381 "arbitration_burst": 0, 00:25:53.381 "low_priority_weight": 0, 00:25:53.381 "medium_priority_weight": 0, 00:25:53.381 "high_priority_weight": 0, 00:25:53.381 "nvme_adminq_poll_period_us": 10000, 00:25:53.381 "nvme_ioq_poll_period_us": 0, 00:25:53.381 "io_queue_requests": 0, 00:25:53.381 "delay_cmd_submit": true, 00:25:53.381 "transport_retry_count": 4, 00:25:53.381 "bdev_retry_count": 3, 00:25:53.381 "transport_ack_timeout": 0, 00:25:53.381 "ctrlr_loss_timeout_sec": 0, 
00:25:53.381 "reconnect_delay_sec": 0, 00:25:53.381 "fast_io_fail_timeout_sec": 0, 00:25:53.381 "disable_auto_failback": false, 00:25:53.381 "generate_uuids": false, 00:25:53.381 "transport_tos": 0, 00:25:53.381 "nvme_error_stat": false, 00:25:53.381 "rdma_srq_size": 0, 00:25:53.381 "io_path_stat": false, 00:25:53.381 "allow_accel_sequence": false, 00:25:53.381 "rdma_max_cq_size": 0, 00:25:53.381 "rdma_cm_event_timeout_ms": 0, 00:25:53.381 "dhchap_digests": [ 00:25:53.381 "sha256", 00:25:53.381 "sha384", 00:25:53.381 "sha512" 00:25:53.381 ], 00:25:53.381 "dhchap_dhgroups": [ 00:25:53.381 "null", 00:25:53.381 "ffdhe2048", 00:25:53.381 "ffdhe3072", 00:25:53.381 "ffdhe4096", 00:25:53.381 "ffdhe6144", 00:25:53.381 "ffdhe8192" 00:25:53.381 ] 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "bdev_nvme_set_hotplug", 00:25:53.381 "params": { 00:25:53.381 "period_us": 100000, 00:25:53.381 "enable": false 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "bdev_malloc_create", 00:25:53.381 "params": { 00:25:53.381 "name": "malloc0", 00:25:53.381 "num_blocks": 8192, 00:25:53.381 "block_size": 4096, 00:25:53.381 "physical_block_size": 4096, 00:25:53.381 "uuid": "d2b9bc1e-6f19-4364-a400-7650f74824d0", 00:25:53.381 "optimal_io_boundary": 0, 00:25:53.381 "md_size": 0, 00:25:53.381 "dif_type": 0, 00:25:53.381 "dif_is_head_of_md": false, 00:25:53.381 "dif_pi_format": 0 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "bdev_wait_for_examine" 00:25:53.381 } 00:25:53.381 ] 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "subsystem": "nbd", 00:25:53.381 "config": [] 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "subsystem": "scheduler", 00:25:53.381 "config": [ 00:25:53.381 { 00:25:53.381 "method": "framework_set_scheduler", 00:25:53.381 "params": { 00:25:53.381 "name": "static" 00:25:53.381 } 00:25:53.381 } 00:25:53.381 ] 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "subsystem": "nvmf", 00:25:53.381 "config": [ 00:25:53.381 { 
00:25:53.381 "method": "nvmf_set_config", 00:25:53.381 "params": { 00:25:53.381 "discovery_filter": "match_any", 00:25:53.381 "admin_cmd_passthru": { 00:25:53.381 "identify_ctrlr": false 00:25:53.381 }, 00:25:53.381 "dhchap_digests": [ 00:25:53.381 "sha256", 00:25:53.381 "sha384", 00:25:53.381 "sha512" 00:25:53.381 ], 00:25:53.381 "dhchap_dhgroups": [ 00:25:53.381 "null", 00:25:53.381 "ffdhe2048", 00:25:53.381 "ffdhe3072", 00:25:53.381 "ffdhe4096", 00:25:53.381 "ffdhe6144", 00:25:53.381 "ffdhe8192" 00:25:53.381 ] 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "nvmf_set_max_subsystems", 00:25:53.381 "params": { 00:25:53.381 "max_subsystems": 1024 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "nvmf_set_crdt", 00:25:53.381 "params": { 00:25:53.381 "crdt1": 0, 00:25:53.381 "crdt2": 0, 00:25:53.381 "crdt3": 0 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "nvmf_create_transport", 00:25:53.381 "params": { 00:25:53.381 "trtype": "TCP", 00:25:53.381 "max_queue_depth": 128, 00:25:53.381 "max_io_qpairs_per_ctrlr": 127, 00:25:53.381 "in_capsule_data_size": 4096, 00:25:53.381 "max_io_size": 131072, 00:25:53.381 "io_unit_size": 131072, 00:25:53.381 "max_aq_depth": 128, 00:25:53.381 "num_shared_buffers": 511, 00:25:53.381 "buf_cache_size": 4294967295, 00:25:53.381 "dif_insert_or_strip": false, 00:25:53.381 "zcopy": false, 00:25:53.381 "c2h_success": false, 00:25:53.381 "sock_priority": 0, 00:25:53.381 "abort_timeout_sec": 1, 00:25:53.381 "ack_timeout": 0, 00:25:53.381 "data_wr_pool_size": 0 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "nvmf_create_subsystem", 00:25:53.381 "params": { 00:25:53.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.381 "allow_any_host": false, 00:25:53.381 "serial_number": "00000000000000000000", 00:25:53.381 "model_number": "SPDK bdev Controller", 00:25:53.381 "max_namespaces": 32, 00:25:53.381 "min_cntlid": 1, 00:25:53.381 "max_cntlid": 65519, 00:25:53.381 
"ana_reporting": false 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "nvmf_subsystem_add_host", 00:25:53.381 "params": { 00:25:53.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.381 "host": "nqn.2016-06.io.spdk:host1", 00:25:53.381 "psk": "key0" 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "nvmf_subsystem_add_ns", 00:25:53.381 "params": { 00:25:53.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.381 "namespace": { 00:25:53.381 "nsid": 1, 00:25:53.381 "bdev_name": "malloc0", 00:25:53.381 "nguid": "D2B9BC1E6F194364A4007650F74824D0", 00:25:53.381 "uuid": "d2b9bc1e-6f19-4364-a400-7650f74824d0", 00:25:53.381 "no_auto_visible": false 00:25:53.381 } 00:25:53.381 } 00:25:53.381 }, 00:25:53.381 { 00:25:53.381 "method": "nvmf_subsystem_add_listener", 00:25:53.381 "params": { 00:25:53.381 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.381 "listen_address": { 00:25:53.381 "trtype": "TCP", 00:25:53.381 "adrfam": "IPv4", 00:25:53.381 "traddr": "10.0.0.2", 00:25:53.381 "trsvcid": "4420" 00:25:53.381 }, 00:25:53.381 "secure_channel": false, 00:25:53.381 "sock_impl": "ssl" 00:25:53.381 } 00:25:53.381 } 00:25:53.381 ] 00:25:53.381 } 00:25:53.381 ] 00:25:53.381 }' 00:25:53.381 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:53.956 "subsystems": [ 00:25:53.956 { 00:25:53.956 "subsystem": "keyring", 00:25:53.956 "config": [ 00:25:53.956 { 00:25:53.956 "method": "keyring_file_add_key", 00:25:53.956 "params": { 00:25:53.956 "name": "key0", 00:25:53.956 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:53.956 } 00:25:53.956 } 00:25:53.956 ] 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "subsystem": "iobuf", 00:25:53.956 "config": [ 00:25:53.956 { 00:25:53.956 "method": "iobuf_set_options", 00:25:53.956 "params": { 00:25:53.956 
"small_pool_count": 8192, 00:25:53.956 "large_pool_count": 1024, 00:25:53.956 "small_bufsize": 8192, 00:25:53.956 "large_bufsize": 135168, 00:25:53.956 "enable_numa": false 00:25:53.956 } 00:25:53.956 } 00:25:53.956 ] 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "subsystem": "sock", 00:25:53.956 "config": [ 00:25:53.956 { 00:25:53.956 "method": "sock_set_default_impl", 00:25:53.956 "params": { 00:25:53.956 "impl_name": "posix" 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "sock_impl_set_options", 00:25:53.956 "params": { 00:25:53.956 "impl_name": "ssl", 00:25:53.956 "recv_buf_size": 4096, 00:25:53.956 "send_buf_size": 4096, 00:25:53.956 "enable_recv_pipe": true, 00:25:53.956 "enable_quickack": false, 00:25:53.956 "enable_placement_id": 0, 00:25:53.956 "enable_zerocopy_send_server": true, 00:25:53.956 "enable_zerocopy_send_client": false, 00:25:53.956 "zerocopy_threshold": 0, 00:25:53.956 "tls_version": 0, 00:25:53.956 "enable_ktls": false 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "sock_impl_set_options", 00:25:53.956 "params": { 00:25:53.956 "impl_name": "posix", 00:25:53.956 "recv_buf_size": 2097152, 00:25:53.956 "send_buf_size": 2097152, 00:25:53.956 "enable_recv_pipe": true, 00:25:53.956 "enable_quickack": false, 00:25:53.956 "enable_placement_id": 0, 00:25:53.956 "enable_zerocopy_send_server": true, 00:25:53.956 "enable_zerocopy_send_client": false, 00:25:53.956 "zerocopy_threshold": 0, 00:25:53.956 "tls_version": 0, 00:25:53.956 "enable_ktls": false 00:25:53.956 } 00:25:53.956 } 00:25:53.956 ] 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "subsystem": "vmd", 00:25:53.956 "config": [] 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "subsystem": "accel", 00:25:53.956 "config": [ 00:25:53.956 { 00:25:53.956 "method": "accel_set_options", 00:25:53.956 "params": { 00:25:53.956 "small_cache_size": 128, 00:25:53.956 "large_cache_size": 16, 00:25:53.956 "task_count": 2048, 00:25:53.956 "sequence_count": 2048, 00:25:53.956 
"buf_count": 2048 00:25:53.956 } 00:25:53.956 } 00:25:53.956 ] 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "subsystem": "bdev", 00:25:53.956 "config": [ 00:25:53.956 { 00:25:53.956 "method": "bdev_set_options", 00:25:53.956 "params": { 00:25:53.956 "bdev_io_pool_size": 65535, 00:25:53.956 "bdev_io_cache_size": 256, 00:25:53.956 "bdev_auto_examine": true, 00:25:53.956 "iobuf_small_cache_size": 128, 00:25:53.956 "iobuf_large_cache_size": 16 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "bdev_raid_set_options", 00:25:53.956 "params": { 00:25:53.956 "process_window_size_kb": 1024, 00:25:53.956 "process_max_bandwidth_mb_sec": 0 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "bdev_iscsi_set_options", 00:25:53.956 "params": { 00:25:53.956 "timeout_sec": 30 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "bdev_nvme_set_options", 00:25:53.956 "params": { 00:25:53.956 "action_on_timeout": "none", 00:25:53.956 "timeout_us": 0, 00:25:53.956 "timeout_admin_us": 0, 00:25:53.956 "keep_alive_timeout_ms": 10000, 00:25:53.956 "arbitration_burst": 0, 00:25:53.956 "low_priority_weight": 0, 00:25:53.956 "medium_priority_weight": 0, 00:25:53.956 "high_priority_weight": 0, 00:25:53.956 "nvme_adminq_poll_period_us": 10000, 00:25:53.956 "nvme_ioq_poll_period_us": 0, 00:25:53.956 "io_queue_requests": 512, 00:25:53.956 "delay_cmd_submit": true, 00:25:53.956 "transport_retry_count": 4, 00:25:53.956 "bdev_retry_count": 3, 00:25:53.956 "transport_ack_timeout": 0, 00:25:53.956 "ctrlr_loss_timeout_sec": 0, 00:25:53.956 "reconnect_delay_sec": 0, 00:25:53.956 "fast_io_fail_timeout_sec": 0, 00:25:53.956 "disable_auto_failback": false, 00:25:53.956 "generate_uuids": false, 00:25:53.956 "transport_tos": 0, 00:25:53.956 "nvme_error_stat": false, 00:25:53.956 "rdma_srq_size": 0, 00:25:53.956 "io_path_stat": false, 00:25:53.956 "allow_accel_sequence": false, 00:25:53.956 "rdma_max_cq_size": 0, 00:25:53.956 "rdma_cm_event_timeout_ms": 0, 
00:25:53.956 "dhchap_digests": [ 00:25:53.956 "sha256", 00:25:53.956 "sha384", 00:25:53.956 "sha512" 00:25:53.956 ], 00:25:53.956 "dhchap_dhgroups": [ 00:25:53.956 "null", 00:25:53.956 "ffdhe2048", 00:25:53.956 "ffdhe3072", 00:25:53.956 "ffdhe4096", 00:25:53.956 "ffdhe6144", 00:25:53.956 "ffdhe8192" 00:25:53.956 ] 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "bdev_nvme_attach_controller", 00:25:53.956 "params": { 00:25:53.956 "name": "nvme0", 00:25:53.956 "trtype": "TCP", 00:25:53.956 "adrfam": "IPv4", 00:25:53.956 "traddr": "10.0.0.2", 00:25:53.956 "trsvcid": "4420", 00:25:53.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.956 "prchk_reftag": false, 00:25:53.956 "prchk_guard": false, 00:25:53.956 "ctrlr_loss_timeout_sec": 0, 00:25:53.956 "reconnect_delay_sec": 0, 00:25:53.956 "fast_io_fail_timeout_sec": 0, 00:25:53.956 "psk": "key0", 00:25:53.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.956 "hdgst": false, 00:25:53.956 "ddgst": false, 00:25:53.956 "multipath": "multipath" 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "bdev_nvme_set_hotplug", 00:25:53.956 "params": { 00:25:53.956 "period_us": 100000, 00:25:53.956 "enable": false 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "bdev_enable_histogram", 00:25:53.956 "params": { 00:25:53.956 "name": "nvme0n1", 00:25:53.956 "enable": true 00:25:53.956 } 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "method": "bdev_wait_for_examine" 00:25:53.956 } 00:25:53.956 ] 00:25:53.956 }, 00:25:53.956 { 00:25:53.956 "subsystem": "nbd", 00:25:53.956 "config": [] 00:25:53.956 } 00:25:53.956 ] 00:25:53.956 }' 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 693215 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 693215 ']' 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 693215 00:25:53.956 12:40:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 693215 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 693215' 00:25:53.956 killing process with pid 693215 00:25:53.956 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 693215 00:25:53.956 Received shutdown signal, test time was about 1.000000 seconds 00:25:53.956 00:25:53.956 Latency(us) 00:25:53.956 [2024-11-05T11:40:23.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.957 [2024-11-05T11:40:23.195Z] =================================================================================================================== 00:25:53.957 [2024-11-05T11:40:23.195Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.957 12:40:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 693215 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 693069 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 693069 ']' 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 693069 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:53.957 12:40:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 693069 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 693069' 00:25:53.957 killing process with pid 693069 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 693069 00:25:53.957 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 693069 00:25:54.214 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:54.214 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.214 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:54.214 "subsystems": [ 00:25:54.214 { 00:25:54.214 "subsystem": "keyring", 00:25:54.214 "config": [ 00:25:54.214 { 00:25:54.214 "method": "keyring_file_add_key", 00:25:54.214 "params": { 00:25:54.214 "name": "key0", 00:25:54.214 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:54.214 } 00:25:54.214 } 00:25:54.214 ] 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "subsystem": "iobuf", 00:25:54.214 "config": [ 00:25:54.214 { 00:25:54.214 "method": "iobuf_set_options", 00:25:54.214 "params": { 00:25:54.214 "small_pool_count": 8192, 00:25:54.214 "large_pool_count": 1024, 00:25:54.214 "small_bufsize": 8192, 00:25:54.214 "large_bufsize": 135168, 00:25:54.214 "enable_numa": false 00:25:54.214 } 00:25:54.214 } 00:25:54.214 ] 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "subsystem": "sock", 00:25:54.214 "config": [ 00:25:54.214 { 00:25:54.214 "method": "sock_set_default_impl", 00:25:54.214 "params": { 00:25:54.214 "impl_name": "posix" 00:25:54.214 
} 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "method": "sock_impl_set_options", 00:25:54.214 "params": { 00:25:54.214 "impl_name": "ssl", 00:25:54.214 "recv_buf_size": 4096, 00:25:54.214 "send_buf_size": 4096, 00:25:54.214 "enable_recv_pipe": true, 00:25:54.214 "enable_quickack": false, 00:25:54.214 "enable_placement_id": 0, 00:25:54.214 "enable_zerocopy_send_server": true, 00:25:54.214 "enable_zerocopy_send_client": false, 00:25:54.214 "zerocopy_threshold": 0, 00:25:54.214 "tls_version": 0, 00:25:54.214 "enable_ktls": false 00:25:54.214 } 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "method": "sock_impl_set_options", 00:25:54.214 "params": { 00:25:54.214 "impl_name": "posix", 00:25:54.214 "recv_buf_size": 2097152, 00:25:54.214 "send_buf_size": 2097152, 00:25:54.214 "enable_recv_pipe": true, 00:25:54.214 "enable_quickack": false, 00:25:54.214 "enable_placement_id": 0, 00:25:54.214 "enable_zerocopy_send_server": true, 00:25:54.214 "enable_zerocopy_send_client": false, 00:25:54.214 "zerocopy_threshold": 0, 00:25:54.214 "tls_version": 0, 00:25:54.214 "enable_ktls": false 00:25:54.214 } 00:25:54.214 } 00:25:54.214 ] 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "subsystem": "vmd", 00:25:54.214 "config": [] 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "subsystem": "accel", 00:25:54.214 "config": [ 00:25:54.214 { 00:25:54.214 "method": "accel_set_options", 00:25:54.214 "params": { 00:25:54.214 "small_cache_size": 128, 00:25:54.214 "large_cache_size": 16, 00:25:54.214 "task_count": 2048, 00:25:54.214 "sequence_count": 2048, 00:25:54.214 "buf_count": 2048 00:25:54.214 } 00:25:54.214 } 00:25:54.214 ] 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "subsystem": "bdev", 00:25:54.214 "config": [ 00:25:54.214 { 00:25:54.214 "method": "bdev_set_options", 00:25:54.214 "params": { 00:25:54.214 "bdev_io_pool_size": 65535, 00:25:54.214 "bdev_io_cache_size": 256, 00:25:54.214 "bdev_auto_examine": true, 00:25:54.214 "iobuf_small_cache_size": 128, 00:25:54.214 "iobuf_large_cache_size": 16 
00:25:54.214 } 00:25:54.214 }, 00:25:54.214 { 00:25:54.214 "method": "bdev_raid_set_options", 00:25:54.214 "params": { 00:25:54.214 "process_window_size_kb": 1024, 00:25:54.214 "process_max_bandwidth_mb_sec": 0 00:25:54.214 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "bdev_iscsi_set_options", 00:25:54.215 "params": { 00:25:54.215 "timeout_sec": 30 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "bdev_nvme_set_options", 00:25:54.215 "params": { 00:25:54.215 "action_on_timeout": "none", 00:25:54.215 "timeout_us": 0, 00:25:54.215 "timeout_admin_us": 0, 00:25:54.215 "keep_alive_timeout_ms": 10000, 00:25:54.215 "arbitration_burst": 0, 00:25:54.215 "low_priority_weight": 0, 00:25:54.215 "medium_priority_weight": 0, 00:25:54.215 "high_priority_weight": 0, 00:25:54.215 "nvme_adminq_poll_period_us": 10000, 00:25:54.215 "nvme_ioq_poll_period_us": 0, 00:25:54.215 "io_queue_requests": 0, 00:25:54.215 "delay_cmd_submit": true, 00:25:54.215 "transport_retry_count": 4, 00:25:54.215 "bdev_retry_count": 3, 00:25:54.215 "transport_ack_timeout": 0, 00:25:54.215 "ctrlr_loss_timeout_sec": 0, 00:25:54.215 "reconnect_delay_sec": 0, 00:25:54.215 "fast_io_fail_timeout_sec": 0, 00:25:54.215 "disable_auto_failback": false, 00:25:54.215 "generate_uuids": false, 00:25:54.215 "transport_tos": 0, 00:25:54.215 "nvme_error_stat": false, 00:25:54.215 "rdma_srq_size": 0, 00:25:54.215 "io_path_stat": false, 00:25:54.215 "allow_accel_sequence": false, 00:25:54.215 "rdma_max_cq_size": 0, 00:25:54.215 "rdma_cm_event_timeout_ms": 0, 00:25:54.215 "dhchap_digests": [ 00:25:54.215 "sha256", 00:25:54.215 "sha384", 00:25:54.215 "sha512" 00:25:54.215 ], 00:25:54.215 "dhchap_dhgroups": [ 00:25:54.215 "null", 00:25:54.215 "ffdhe2048", 00:25:54.215 "ffdhe3072", 00:25:54.215 "ffdhe4096", 00:25:54.215 "ffdhe6144", 00:25:54.215 "ffdhe8192" 00:25:54.215 ] 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "bdev_nvme_set_hotplug", 00:25:54.215 "params": { 00:25:54.215 
"period_us": 100000, 00:25:54.215 "enable": false 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "bdev_malloc_create", 00:25:54.215 "params": { 00:25:54.215 "name": "malloc0", 00:25:54.215 "num_blocks": 8192, 00:25:54.215 "block_size": 4096, 00:25:54.215 "physical_block_size": 4096, 00:25:54.215 "uuid": "d2b9bc1e-6f19-4364-a400-7650f74824d0", 00:25:54.215 "optimal_io_boundary": 0, 00:25:54.215 "md_size": 0, 00:25:54.215 "dif_type": 0, 00:25:54.215 "dif_is_head_of_md": false, 00:25:54.215 "dif_pi_format": 0 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "bdev_wait_for_examine" 00:25:54.215 } 00:25:54.215 ] 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "subsystem": "nbd", 00:25:54.215 "config": [] 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "subsystem": "scheduler", 00:25:54.215 "config": [ 00:25:54.215 { 00:25:54.215 "method": "framework_set_scheduler", 00:25:54.215 "params": { 00:25:54.215 "name": "static" 00:25:54.215 } 00:25:54.215 } 00:25:54.215 ] 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "subsystem": "nvmf", 00:25:54.215 "config": [ 00:25:54.215 { 00:25:54.215 "method": "nvmf_set_config", 00:25:54.215 "params": { 00:25:54.215 "discovery_filter": "match_any", 00:25:54.215 "admin_cmd_passthru": { 00:25:54.215 "identify_ctrlr": false 00:25:54.215 }, 00:25:54.215 "dhchap_digests": [ 00:25:54.215 "sha256", 00:25:54.215 "sha384", 00:25:54.215 "sha512" 00:25:54.215 ], 00:25:54.215 "dhchap_dhgroups": [ 00:25:54.215 "null", 00:25:54.215 "ffdhe2048", 00:25:54.215 "ffdhe3072", 00:25:54.215 "ffdhe4096", 00:25:54.215 "ffdhe6144", 00:25:54.215 "ffdhe8192" 00:25:54.215 ] 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "nvmf_set_max_subsystems", 00:25:54.215 "params": { 00:25:54.215 "max_subsystems": 1024 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "nvmf_set_crdt", 00:25:54.215 "params": { 00:25:54.215 "crdt1": 0, 00:25:54.215 "crdt2": 0, 00:25:54.215 "crdt3": 0 00:25:54.215 } 
00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "nvmf_create_transport", 00:25:54.215 "params": { 00:25:54.215 "trtype": "TCP", 00:25:54.215 "max_queue_depth": 128, 00:25:54.215 "max_io_qpairs_per_ctrlr": 127, 00:25:54.215 "in_capsule_data_size": 4096, 00:25:54.215 "max_io_size": 131072, 00:25:54.215 "io_unit_size": 131072, 00:25:54.215 "max_aq_depth": 128, 00:25:54.215 "num_shared_buffers": 511, 00:25:54.215 "buf_cache_size": 4294967295, 00:25:54.215 "dif_insert_or_strip": false, 00:25:54.215 "zcopy": false, 00:25:54.215 "c2h_success": false, 00:25:54.215 "sock_priority": 0, 00:25:54.215 "abort_timeout_sec": 1, 00:25:54.215 "ack_timeout": 0, 00:25:54.215 "data_wr_pool_size": 0 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "nvmf_create_subsystem", 00:25:54.215 "params": { 00:25:54.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.215 "allow_any_host": false, 00:25:54.215 "serial_number": "00000000000000000000", 00:25:54.215 "model_number": "SPDK bdev Controller", 00:25:54.215 "max_namespaces": 32, 00:25:54.215 "min_cntlid": 1, 00:25:54.215 "max_cntlid": 65519, 00:25:54.215 "ana_reporting": false 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "nvmf_subsystem_add_host", 00:25:54.215 "params": { 00:25:54.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.215 "host": "nqn.2016-06.io.spdk:host1", 00:25:54.215 "psk": "key0" 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "nvmf_subsystem_add_ns", 00:25:54.215 "params": { 00:25:54.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.215 "namespace": { 00:25:54.215 "nsid": 1, 00:25:54.215 "bdev_name": "malloc0", 00:25:54.215 "nguid": "D2B9BC1E6F194364A4007650F74824D0", 00:25:54.215 "uuid": "d2b9bc1e-6f19-4364-a400-7650f74824d0", 00:25:54.215 "no_auto_visible": false 00:25:54.215 } 00:25:54.215 } 00:25:54.215 }, 00:25:54.215 { 00:25:54.215 "method": "nvmf_subsystem_add_listener", 00:25:54.215 "params": { 00:25:54.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:54.215 "listen_address": { 00:25:54.215 "trtype": "TCP", 00:25:54.215 "adrfam": "IPv4", 00:25:54.215 "traddr": "10.0.0.2", 00:25:54.215 "trsvcid": "4420" 00:25:54.215 }, 00:25:54.215 "secure_channel": false, 00:25:54.215 "sock_impl": "ssl" 00:25:54.215 } 00:25:54.215 } 00:25:54.215 ] 00:25:54.215 } 00:25:54.215 ] 00:25:54.215 }' 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=693505 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 693505 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 693505 ']' 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.215 12:40:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:54.215 [2024-11-05 12:40:23.414591] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:25:54.215 [2024-11-05 12:40:23.414662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.473 [2024-11-05 12:40:23.485946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.473 [2024-11-05 12:40:23.530672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.473 [2024-11-05 12:40:23.530740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.473 [2024-11-05 12:40:23.530754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.473 [2024-11-05 12:40:23.530768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.473 [2024-11-05 12:40:23.530778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
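For reference, the bdevperf result JSON earlier in the log reported roughly 3561 IOPS at 13.91 MiB/s over a 1.022 s runtime with 4 KiB I/Os. Those fields are internally consistent: MiB/s is just IOPS scaled by the I/O size, and IOPS times runtime recovers a whole number of completed I/Os. A quick check using the values copied verbatim from the log:

```python
# Result fields copied from the bdevperf JSON output earlier in the log
result = {
    "runtime": 1.022274,
    "iops": 3561.667419889384,
    "mibps": 13.912763358942906,
    "io_size": 4096,
}

# MiB/s follows directly from IOPS: iops * io_size / 2^20
derived_mibps = result["iops"] * result["io_size"] / (1 << 20)
assert abs(derived_mibps - result["mibps"]) < 1e-9

# IOPS * runtime should land on a whole number of completed I/Os
total_ios = result["iops"] * result["runtime"]
assert abs(total_ios - round(total_ios)) < 0.01
```

This kind of sanity check is handy when triaging a failed autotest run: if the derived MiB/s diverges from the reported one, the JSON was truncated or mis-parsed rather than the benchmark misbehaving.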
00:25:54.473 [2024-11-05 12:40:23.531462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.731 [2024-11-05 12:40:23.768643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.731 [2024-11-05 12:40:23.800713] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:54.731 [2024-11-05 12:40:23.800979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=693653 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 693653 /var/tmp/bdevperf.sock 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 693653 ']' 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:25:55.296 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:55.296 "subsystems": [ 00:25:55.296 { 00:25:55.296 "subsystem": "keyring", 00:25:55.296 "config": [ 00:25:55.296 { 00:25:55.296 "method": "keyring_file_add_key", 00:25:55.296 "params": { 00:25:55.296 "name": "key0", 00:25:55.296 "path": "/tmp/tmp.o9qx4O8mZV" 00:25:55.296 } 00:25:55.296 } 00:25:55.296 ] 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "subsystem": "iobuf", 00:25:55.296 "config": [ 00:25:55.296 { 00:25:55.296 "method": "iobuf_set_options", 00:25:55.296 "params": { 00:25:55.296 "small_pool_count": 8192, 00:25:55.296 "large_pool_count": 1024, 00:25:55.296 "small_bufsize": 8192, 00:25:55.296 "large_bufsize": 135168, 00:25:55.296 "enable_numa": false 00:25:55.296 } 00:25:55.296 } 00:25:55.296 ] 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "subsystem": "sock", 00:25:55.296 "config": [ 00:25:55.296 { 00:25:55.296 "method": "sock_set_default_impl", 00:25:55.296 "params": { 00:25:55.296 "impl_name": "posix" 00:25:55.296 } 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "method": "sock_impl_set_options", 00:25:55.296 "params": { 00:25:55.296 "impl_name": "ssl", 00:25:55.296 "recv_buf_size": 4096, 00:25:55.296 "send_buf_size": 4096, 00:25:55.296 "enable_recv_pipe": true, 00:25:55.296 "enable_quickack": false, 00:25:55.296 "enable_placement_id": 0, 00:25:55.296 "enable_zerocopy_send_server": true, 00:25:55.296 "enable_zerocopy_send_client": false, 00:25:55.296 "zerocopy_threshold": 0, 00:25:55.296 "tls_version": 0, 00:25:55.296 "enable_ktls": false 00:25:55.296 } 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "method": "sock_impl_set_options", 00:25:55.296 "params": { 00:25:55.296 "impl_name": "posix", 00:25:55.296 "recv_buf_size": 2097152, 00:25:55.296 "send_buf_size": 2097152, 00:25:55.296 "enable_recv_pipe": true, 00:25:55.296 "enable_quickack": false, 00:25:55.296 "enable_placement_id": 0, 00:25:55.296 "enable_zerocopy_send_server": true, 00:25:55.296 
"enable_zerocopy_send_client": false, 00:25:55.296 "zerocopy_threshold": 0, 00:25:55.296 "tls_version": 0, 00:25:55.296 "enable_ktls": false 00:25:55.296 } 00:25:55.296 } 00:25:55.296 ] 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "subsystem": "vmd", 00:25:55.296 "config": [] 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "subsystem": "accel", 00:25:55.296 "config": [ 00:25:55.296 { 00:25:55.296 "method": "accel_set_options", 00:25:55.296 "params": { 00:25:55.296 "small_cache_size": 128, 00:25:55.296 "large_cache_size": 16, 00:25:55.296 "task_count": 2048, 00:25:55.296 "sequence_count": 2048, 00:25:55.296 "buf_count": 2048 00:25:55.296 } 00:25:55.296 } 00:25:55.296 ] 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "subsystem": "bdev", 00:25:55.296 "config": [ 00:25:55.296 { 00:25:55.296 "method": "bdev_set_options", 00:25:55.296 "params": { 00:25:55.296 "bdev_io_pool_size": 65535, 00:25:55.296 "bdev_io_cache_size": 256, 00:25:55.296 "bdev_auto_examine": true, 00:25:55.296 "iobuf_small_cache_size": 128, 00:25:55.296 "iobuf_large_cache_size": 16 00:25:55.296 } 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "method": "bdev_raid_set_options", 00:25:55.296 "params": { 00:25:55.296 "process_window_size_kb": 1024, 00:25:55.296 "process_max_bandwidth_mb_sec": 0 00:25:55.296 } 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "method": "bdev_iscsi_set_options", 00:25:55.296 "params": { 00:25:55.296 "timeout_sec": 30 00:25:55.296 } 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "method": "bdev_nvme_set_options", 00:25:55.296 "params": { 00:25:55.296 "action_on_timeout": "none", 00:25:55.296 "timeout_us": 0, 00:25:55.296 "timeout_admin_us": 0, 00:25:55.296 "keep_alive_timeout_ms": 10000, 00:25:55.296 "arbitration_burst": 0, 00:25:55.296 "low_priority_weight": 0, 00:25:55.296 "medium_priority_weight": 0, 00:25:55.296 "high_priority_weight": 0, 00:25:55.296 "nvme_adminq_poll_period_us": 10000, 00:25:55.296 "nvme_ioq_poll_period_us": 0, 00:25:55.296 "io_queue_requests": 512, 00:25:55.296 
"delay_cmd_submit": true, 00:25:55.296 "transport_retry_count": 4, 00:25:55.296 "bdev_retry_count": 3, 00:25:55.296 "transport_ack_timeout": 0, 00:25:55.296 "ctrlr_loss_timeout_sec": 0, 00:25:55.296 "reconnect_delay_sec": 0, 00:25:55.296 "fast_io_fail_timeout_sec": 0, 00:25:55.296 "disable_auto_failback": false, 00:25:55.296 "generate_uuids": false, 00:25:55.296 "transport_tos": 0, 00:25:55.296 "nvme_error_stat": false, 00:25:55.296 "rdma_srq_size": 0, 00:25:55.296 "io_path_stat": false, 00:25:55.296 "allow_accel_sequence": false, 00:25:55.296 "rdma_max_cq_size": 0, 00:25:55.296 "rdma_cm_event_timeout_ms": 0, 00:25:55.296 "dhchap_digests": [ 00:25:55.296 "sha256", 00:25:55.296 "sha384", 00:25:55.296 "sha512" 00:25:55.296 ], 00:25:55.296 "dhchap_dhgroups": [ 00:25:55.296 "null", 00:25:55.296 "ffdhe2048", 00:25:55.296 "ffdhe3072", 00:25:55.296 "ffdhe4096", 00:25:55.296 "ffdhe6144", 00:25:55.296 "ffdhe8192" 00:25:55.296 ] 00:25:55.296 } 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "method": "bdev_nvme_attach_controller", 00:25:55.296 "params": { 00:25:55.296 "name": "nvme0", 00:25:55.296 "trtype": "TCP", 00:25:55.296 "adrfam": "IPv4", 00:25:55.296 "traddr": "10.0.0.2", 00:25:55.296 "trsvcid": "4420", 00:25:55.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.296 "prchk_reftag": false, 00:25:55.296 "prchk_guard": false, 00:25:55.296 "ctrlr_loss_timeout_sec": 0, 00:25:55.296 "reconnect_delay_sec": 0, 00:25:55.296 "fast_io_fail_timeout_sec": 0, 00:25:55.296 "psk": "key0", 00:25:55.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:55.296 "hdgst": false, 00:25:55.296 "ddgst": false, 00:25:55.296 "multipath": "multipath" 00:25:55.296 } 00:25:55.296 }, 00:25:55.296 { 00:25:55.296 "method": "bdev_nvme_set_hotplug", 00:25:55.296 "params": { 00:25:55.296 "period_us": 100000, 00:25:55.297 "enable": false 00:25:55.297 } 00:25:55.297 }, 00:25:55.297 { 00:25:55.297 "method": "bdev_enable_histogram", 00:25:55.297 "params": { 00:25:55.297 "name": "nvme0n1", 00:25:55.297 "enable": 
true 00:25:55.297 } 00:25:55.297 }, 00:25:55.297 { 00:25:55.297 "method": "bdev_wait_for_examine" 00:25:55.297 } 00:25:55.297 ] 00:25:55.297 }, 00:25:55.297 { 00:25:55.297 "subsystem": "nbd", 00:25:55.297 "config": [] 00:25:55.297 } 00:25:55.297 ] 00:25:55.297 }' 00:25:55.297 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:55.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:55.297 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:55.297 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.554 [2024-11-05 12:40:24.544518] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:25:55.554 [2024-11-05 12:40:24.544610] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693653 ] 00:25:55.554 [2024-11-05 12:40:24.611691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.554 [2024-11-05 12:40:24.657674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.812 [2024-11-05 12:40:24.836237] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:55.812 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.812 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:25:55.812 12:40:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:55.812 12:40:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:56.070 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.070 12:40:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:56.327 Running I/O for 1 seconds... 00:25:57.259 3563.00 IOPS, 13.92 MiB/s 00:25:57.259 Latency(us) 00:25:57.259 [2024-11-05T11:40:26.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.259 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:57.259 Verification LBA range: start 0x0 length 0x2000 00:25:57.259 nvme0n1 : 1.03 3577.28 13.97 0.00 0.00 35319.38 6456.51 38059.43 00:25:57.259 [2024-11-05T11:40:26.497Z] =================================================================================================================== 00:25:57.259 [2024-11-05T11:40:26.497Z] Total : 3577.28 13.97 0.00 0.00 35319.38 6456.51 38059.43 00:25:57.259 { 00:25:57.259 "results": [ 00:25:57.259 { 00:25:57.259 "job": "nvme0n1", 00:25:57.259 "core_mask": "0x2", 00:25:57.259 "workload": "verify", 00:25:57.259 "status": "finished", 00:25:57.259 "verify_range": { 00:25:57.259 "start": 0, 00:25:57.259 "length": 8192 00:25:57.259 }, 00:25:57.259 "queue_depth": 128, 00:25:57.259 "io_size": 4096, 00:25:57.259 "runtime": 1.03179, 00:25:57.259 "iops": 3577.278322139195, 00:25:57.259 "mibps": 13.97374344585623, 00:25:57.259 "io_failed": 0, 00:25:57.259 "io_timeout": 0, 00:25:57.259 "avg_latency_us": 35319.37763167665, 00:25:57.259 "min_latency_us": 6456.50962962963, 00:25:57.259 "max_latency_us": 38059.42518518519 00:25:57.259 } 00:25:57.259 ], 00:25:57.259 "core_count": 1 00:25:57.259 } 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:57.259 12:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:57.259 nvmf_trace.0 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 693653 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 693653 ']' 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 693653 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 693653 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 693653' 00:25:57.259 killing process with pid 693653 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 693653 00:25:57.259 Received shutdown signal, test time was about 1.000000 seconds 00:25:57.259 00:25:57.259 Latency(us) 00:25:57.259 [2024-11-05T11:40:26.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.259 [2024-11-05T11:40:26.497Z] =================================================================================================================== 00:25:57.259 [2024-11-05T11:40:26.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:57.259 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 693653 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:57.517 rmmod nvme_tcp 00:25:57.517 rmmod nvme_fabrics 00:25:57.517 rmmod nvme_keyring 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 693505 ']' 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 693505 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 693505 ']' 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 693505 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:57.517 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 693505 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 693505' 00:25:57.777 killing process with pid 693505 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 693505 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 693505 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.777 12:40:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.309 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:00.309 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9hHd8IWT2o /tmp/tmp.2yGdlNp4Ol /tmp/tmp.o9qx4O8mZV 00:26:00.309 00:26:00.309 real 1m22.269s 00:26:00.309 user 2m15.021s 00:26:00.309 sys 0m26.064s 00:26:00.309 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:00.309 12:40:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:00.309 ************************************ 00:26:00.309 END TEST nvmf_tls 00:26:00.309 ************************************ 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.309 ************************************ 00:26:00.309 START TEST nvmf_fips 00:26:00.309 ************************************ 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:00.309 * Looking for test storage... 00:26:00.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.309 
12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:26:00.309 12:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:00.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.309 --rc genhtml_branch_coverage=1 00:26:00.309 --rc genhtml_function_coverage=1 00:26:00.309 --rc genhtml_legend=1 00:26:00.309 --rc geninfo_all_blocks=1 00:26:00.309 --rc geninfo_unexecuted_blocks=1 00:26:00.309 00:26:00.309 ' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:00.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.309 --rc genhtml_branch_coverage=1 00:26:00.309 --rc genhtml_function_coverage=1 00:26:00.309 --rc genhtml_legend=1 00:26:00.309 --rc geninfo_all_blocks=1 00:26:00.309 --rc geninfo_unexecuted_blocks=1 00:26:00.309 00:26:00.309 ' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:00.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.309 --rc genhtml_branch_coverage=1 00:26:00.309 --rc genhtml_function_coverage=1 00:26:00.309 --rc genhtml_legend=1 00:26:00.309 --rc geninfo_all_blocks=1 00:26:00.309 --rc geninfo_unexecuted_blocks=1 00:26:00.309 00:26:00.309 ' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:00.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.309 --rc genhtml_branch_coverage=1 00:26:00.309 --rc genhtml_function_coverage=1 00:26:00.309 --rc genhtml_legend=1 00:26:00.309 --rc geninfo_all_blocks=1 00:26:00.309 --rc geninfo_unexecuted_blocks=1 00:26:00.309 00:26:00.309 ' 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.309 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.310 12:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.310 12:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:26:00.310 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:26:00.311 Error setting digest 00:26:00.311 4092824F067F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:26:00.311 4092824F067F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:00.311 12:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.311 12:40:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:02.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:02.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.848 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:02.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:02.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.849 12:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:26:02.849 00:26:02.849 --- 10.0.0.2 ping statistics --- 00:26:02.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.849 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:02.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:26:02.849 00:26:02.849 --- 10.0.0.1 ping statistics --- 00:26:02.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.849 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.849 12:40:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=695891 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 695891 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 695891 ']' 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:02.849 12:40:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:02.849 [2024-11-05 12:40:31.777603] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:26:02.849 [2024-11-05 12:40:31.777696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.849 [2024-11-05 12:40:31.857881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.849 [2024-11-05 12:40:31.904169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.849 [2024-11-05 12:40:31.904227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.849 [2024-11-05 12:40:31.904250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.849 [2024-11-05 12:40:31.904261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.849 [2024-11-05 12:40:31.904270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:02.849 [2024-11-05 12:40:31.904812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.PBD 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.PBD 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.PBD 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.PBD 00:26:02.849 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:03.107 [2024-11-05 12:40:32.290469] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.107 [2024-11-05 12:40:32.306460] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:03.107 [2024-11-05 12:40:32.306681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.365 malloc0 00:26:03.365 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:03.365 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=696045 00:26:03.365 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:03.365 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 696045 /var/tmp/bdevperf.sock 00:26:03.365 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 696045 ']' 00:26:03.366 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.366 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:03.366 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:03.366 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:03.366 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:03.366 [2024-11-05 12:40:32.431714] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:26:03.366 [2024-11-05 12:40:32.431789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696045 ] 00:26:03.366 [2024-11-05 12:40:32.497058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.366 [2024-11-05 12:40:32.541873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.623 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:03.623 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:26:03.623 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.PBD 00:26:03.881 12:40:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:04.138 [2024-11-05 12:40:33.169921] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:04.138 TLSTESTn1 00:26:04.138 12:40:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:04.138 Running I/O for 10 seconds... 
00:26:06.441 2988.00 IOPS, 11.67 MiB/s [2024-11-05T11:40:36.611Z] 3125.00 IOPS, 12.21 MiB/s [2024-11-05T11:40:37.543Z] 3161.67 IOPS, 12.35 MiB/s [2024-11-05T11:40:38.544Z] 3174.00 IOPS, 12.40 MiB/s [2024-11-05T11:40:39.476Z] 3189.60 IOPS, 12.46 MiB/s [2024-11-05T11:40:40.409Z] 3206.17 IOPS, 12.52 MiB/s [2024-11-05T11:40:41.780Z] 3218.29 IOPS, 12.57 MiB/s [2024-11-05T11:40:42.711Z] 3220.00 IOPS, 12.58 MiB/s [2024-11-05T11:40:43.748Z] 3214.67 IOPS, 12.56 MiB/s [2024-11-05T11:40:43.749Z] 3222.00 IOPS, 12.59 MiB/s 00:26:14.511 Latency(us) 00:26:14.511 [2024-11-05T11:40:43.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.511 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:14.511 Verification LBA range: start 0x0 length 0x2000 00:26:14.511 TLSTESTn1 : 10.02 3228.95 12.61 0.00 0.00 39577.24 6262.33 51652.08 00:26:14.511 [2024-11-05T11:40:43.749Z] =================================================================================================================== 00:26:14.511 [2024-11-05T11:40:43.749Z] Total : 3228.95 12.61 0.00 0.00 39577.24 6262.33 51652.08 00:26:14.511 { 00:26:14.511 "results": [ 00:26:14.511 { 00:26:14.511 "job": "TLSTESTn1", 00:26:14.511 "core_mask": "0x4", 00:26:14.511 "workload": "verify", 00:26:14.511 "status": "finished", 00:26:14.511 "verify_range": { 00:26:14.511 "start": 0, 00:26:14.511 "length": 8192 00:26:14.511 }, 00:26:14.511 "queue_depth": 128, 00:26:14.511 "io_size": 4096, 00:26:14.511 "runtime": 10.017203, 00:26:14.511 "iops": 3228.9452454941766, 00:26:14.511 "mibps": 12.613067365211627, 00:26:14.511 "io_failed": 0, 00:26:14.511 "io_timeout": 0, 00:26:14.511 "avg_latency_us": 39577.240791237986, 00:26:14.511 "min_latency_us": 6262.328888888889, 00:26:14.511 "max_latency_us": 51652.07703703704 00:26:14.511 } 00:26:14.511 ], 00:26:14.511 "core_count": 1 00:26:14.511 } 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:14.511 
12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:14.511 nvmf_trace.0 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 696045 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 696045 ']' 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 696045 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 696045 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 696045' 00:26:14.511 killing process with pid 696045 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 696045 00:26:14.511 Received shutdown signal, test time was about 10.000000 seconds 00:26:14.511 00:26:14.511 Latency(us) 00:26:14.511 [2024-11-05T11:40:43.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.511 [2024-11-05T11:40:43.749Z] =================================================================================================================== 00:26:14.511 [2024-11-05T11:40:43.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 696045 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.511 rmmod nvme_tcp 00:26:14.511 rmmod nvme_fabrics 00:26:14.511 rmmod nvme_keyring 00:26:14.511 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.769 12:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 695891 ']' 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 695891 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 695891 ']' 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 695891 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 695891 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 695891' 00:26:14.769 killing process with pid 695891 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 695891 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 695891 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.769 12:40:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.769 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.769 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:14.769 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.769 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.769 12:40:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.PBD 00:26:17.301 00:26:17.301 real 0m16.999s 00:26:17.301 user 0m18.768s 00:26:17.301 sys 0m6.972s 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:17.301 ************************************ 00:26:17.301 END TEST nvmf_fips 00:26:17.301 ************************************ 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:17.301 ************************************ 00:26:17.301 START TEST nvmf_control_msg_list 00:26:17.301 ************************************ 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:17.301 * Looking for test storage... 00:26:17.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.301 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:17.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.302 --rc genhtml_branch_coverage=1 00:26:17.302 --rc genhtml_function_coverage=1 00:26:17.302 --rc genhtml_legend=1 00:26:17.302 --rc geninfo_all_blocks=1 00:26:17.302 --rc geninfo_unexecuted_blocks=1 00:26:17.302 00:26:17.302 ' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:17.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.302 --rc genhtml_branch_coverage=1 00:26:17.302 --rc genhtml_function_coverage=1 00:26:17.302 --rc genhtml_legend=1 00:26:17.302 --rc geninfo_all_blocks=1 00:26:17.302 --rc geninfo_unexecuted_blocks=1 00:26:17.302 00:26:17.302 ' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:17.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.302 --rc genhtml_branch_coverage=1 00:26:17.302 --rc genhtml_function_coverage=1 00:26:17.302 --rc genhtml_legend=1 00:26:17.302 --rc geninfo_all_blocks=1 00:26:17.302 --rc geninfo_unexecuted_blocks=1 00:26:17.302 00:26:17.302 ' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:17.302 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.302 --rc genhtml_branch_coverage=1 00:26:17.302 --rc genhtml_function_coverage=1 00:26:17.302 --rc genhtml_legend=1 00:26:17.302 --rc geninfo_all_blocks=1 00:26:17.302 --rc geninfo_unexecuted_blocks=1 00:26:17.302 00:26:17.302 ' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:17.302 12:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.302 12:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.302 12:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.302 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:26:17.303 12:40:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:26:19.831 12:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:19.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:19.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:19.831 12:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:19.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.831 12:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:19.831 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:19.831 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.832 12:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:19.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:26:19.832 00:26:19.832 --- 10.0.0.2 ping statistics --- 00:26:19.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.832 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:26:19.832 00:26:19.832 --- 10.0.0.1 ping statistics --- 00:26:19.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.832 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=699309 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 699309 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 699309 ']' 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 [2024-11-05 12:40:48.717136] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:26:19.832 [2024-11-05 12:40:48.717229] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.832 [2024-11-05 12:40:48.788331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.832 [2024-11-05 12:40:48.831849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.832 [2024-11-05 12:40:48.831907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.832 [2024-11-05 12:40:48.831933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.832 [2024-11-05 12:40:48.831944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.832 [2024-11-05 12:40:48.831955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:19.832 [2024-11-05 12:40:48.832544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 [2024-11-05 12:40:48.966555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 Malloc0 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.832 12:40:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:19.832 [2024-11-05 12:40:49.005896] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=699400 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=699402 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=699404 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:19.832 12:40:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 699400 00:26:20.090 [2024-11-05 12:40:49.084954] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:26:20.090 [2024-11-05 12:40:49.085247] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:20.090 [2024-11-05 12:40:49.085507] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:21.022 Initializing NVMe Controllers 00:26:21.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:21.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:26:21.022 Initialization complete. Launching workers. 00:26:21.022 ======================================================== 00:26:21.022 Latency(us) 00:26:21.022 Device Information : IOPS MiB/s Average min max 00:26:21.022 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4134.00 16.15 241.44 155.00 567.46 00:26:21.022 ======================================================== 00:26:21.022 Total : 4134.00 16.15 241.44 155.00 567.46 00:26:21.022 00:26:21.022 Initializing NVMe Controllers 00:26:21.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:21.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:26:21.022 Initialization complete. Launching workers. 
00:26:21.022 ======================================================== 00:26:21.022 Latency(us) 00:26:21.022 Device Information : IOPS MiB/s Average min max 00:26:21.022 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 126.00 0.49 7983.33 195.88 40973.90 00:26:21.022 ======================================================== 00:26:21.022 Total : 126.00 0.49 7983.33 195.88 40973.90 00:26:21.022 00:26:21.022 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 699402 00:26:21.280 Initializing NVMe Controllers 00:26:21.280 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:21.280 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:26:21.280 Initialization complete. Launching workers. 00:26:21.280 ======================================================== 00:26:21.280 Latency(us) 00:26:21.280 Device Information : IOPS MiB/s Average min max 00:26:21.280 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3491.00 13.64 285.98 152.75 41160.55 00:26:21.280 ======================================================== 00:26:21.280 Total : 3491.00 13.64 285.98 152.75 41160.55 00:26:21.280 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 699404 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:21.280 12:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:21.280 rmmod nvme_tcp 00:26:21.280 rmmod nvme_fabrics 00:26:21.280 rmmod nvme_keyring 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 699309 ']' 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 699309 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 699309 ']' 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 699309 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 699309 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 699309' 00:26:21.280 killing process with pid 699309 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 699309 00:26:21.280 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 699309 00:26:21.537 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.538 12:40:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.437 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:23.438 00:26:23.438 real 0m6.525s 00:26:23.438 user 0m5.669s 00:26:23.438 sys 
0m2.748s 00:26:23.438 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:23.438 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:23.438 ************************************ 00:26:23.438 END TEST nvmf_control_msg_list 00:26:23.438 ************************************ 00:26:23.438 12:40:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:23.438 12:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:23.438 12:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:23.438 12:40:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:23.438 ************************************ 00:26:23.438 START TEST nvmf_wait_for_buf 00:26:23.438 ************************************ 00:26:23.438 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:23.696 * Looking for test storage... 
00:26:23.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:26:23.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.696 --rc genhtml_branch_coverage=1 00:26:23.696 --rc genhtml_function_coverage=1 00:26:23.696 --rc genhtml_legend=1 00:26:23.696 --rc geninfo_all_blocks=1 00:26:23.696 --rc geninfo_unexecuted_blocks=1 00:26:23.696 00:26:23.696 ' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:23.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.696 --rc genhtml_branch_coverage=1 00:26:23.696 --rc genhtml_function_coverage=1 00:26:23.696 --rc genhtml_legend=1 00:26:23.696 --rc geninfo_all_blocks=1 00:26:23.696 --rc geninfo_unexecuted_blocks=1 00:26:23.696 00:26:23.696 ' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:23.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.696 --rc genhtml_branch_coverage=1 00:26:23.696 --rc genhtml_function_coverage=1 00:26:23.696 --rc genhtml_legend=1 00:26:23.696 --rc geninfo_all_blocks=1 00:26:23.696 --rc geninfo_unexecuted_blocks=1 00:26:23.696 00:26:23.696 ' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:23.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.696 --rc genhtml_branch_coverage=1 00:26:23.696 --rc genhtml_function_coverage=1 00:26:23.696 --rc genhtml_legend=1 00:26:23.696 --rc geninfo_all_blocks=1 00:26:23.696 --rc geninfo_unexecuted_blocks=1 00:26:23.696 00:26:23.696 ' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.696 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:23.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:23.697 12:40:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.227 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:26.228 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:26.228 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
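The discovery loop above resolves each supported NIC's PCI address (here two Intel E810 0x159b functions) to its kernel net device by globbing sysfs. A minimal sketch of that lookup; the sysfs-root parameter is an assumption added so the function can be exercised against a fake tree, whereas the harness globs /sys directly:

```shell
# Sketch of the per-PCI net-device lookup: list the entries under
# <sysfs>/bus/pci/devices/<bdf>/net/ and keep only the interface
# basenames, as the "${pci_net_devs[@]##*/}" expansion in the trace does.
# The sysfs_root argument is an added testability hook, not in the harness.
pci_to_netdevs() {
    local sysfs_root=$1 pci=$2
    local devs=("$sysfs_root/bus/pci/devices/$pci/net/"*)
    [ -e "${devs[0]}" ] || return 1   # no net device bound to this PCI function
    printf '%s\n' "${devs[@]##*/}"    # e.g. cvl_0_0
}
```
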
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:26.228 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:26.228 12:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:26.228 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:26.228 12:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:26.228 12:40:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.228 12:40:55 
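The run of ip commands above builds the test topology: the target-side interface is moved into its own network namespace so initiator and target traffic actually traverse the link. A dry-run sketch of the same plumbing, with names and addresses taken from the log; the DRYRUN hook is an addition so the sequence can be printed without root:

```shell
# Echoes (with DRYRUN=echo) or, as root, executes the namespace plumbing
# seen in the trace: target NIC into a private netns, 10.0.0.1/24 on the
# initiator side, 10.0.0.2/24 on the target side, all links up.
setup_target_ns() {
    local run=${DRYRUN:-} ns=$1 tgt_if=$2 ini_if=$3
    $run ip netns add "$ns"
    $run ip link set "$tgt_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$ini_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $run ip link set "$ini_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    $run ip netns exec "$ns" ip link set lo up
}
```
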
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:26.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:26:26.228 00:26:26.228 --- 10.0.0.2 ping statistics --- 00:26:26.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.228 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:26:26.228 00:26:26.228 --- 10.0.0.1 ping statistics --- 00:26:26.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.228 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:26.228 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=701522 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 701522 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 701522 ']' 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.229 [2024-11-05 12:40:55.107958] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:26:26.229 [2024-11-05 12:40:55.108054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.229 [2024-11-05 12:40:55.182389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.229 [2024-11-05 12:40:55.229604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.229 [2024-11-05 12:40:55.229686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
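waitforlisten above blocks until the freshly started nvmf_tgt (pid 701522) creates its RPC socket at /var/tmp/spdk.sock, with max_retries=100 per the trace. A simplified sketch of that polling loop: it only watches for the socket path to appear, whereas the real helper also checks the process is still alive and that the RPC server answers:

```shell
# Poll for the RPC socket path, giving up after max_retries attempts.
# Simplification: the harness's waitforlisten additionally kill -0's the
# pid and probes the server; here we only wait for the path to exist.
waitforsocket() {
    local sock=$1 max_retries=${2:-100} delay=${3:-0.1}
    local i=0
    while [ ! -e "$sock" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep "$delay"
    done
    return 0
}
```
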
00:26:26.229 [2024-11-05 12:40:55.229700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.229 [2024-11-05 12:40:55.229711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.229 [2024-11-05 12:40:55.229720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:26.229 [2024-11-05 12:40:55.230308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.229 
12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.229 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.487 Malloc0 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.487 [2024-11-05 12:40:55.478186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:26.487 [2024-11-05 12:40:55.502397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
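Stripped of the xtrace noise, the target configuration above boils down to eight RPCs: disable the accel caches, shrink the iobuf small pool to 154 buffers (so allocation retries are guaranteed), finish framework init, and publish a malloc namespace over TCP on 10.0.0.2:4420. A sketch with the values copied from the log; RPC defaulting to echo is an added dry-run hook, while in the harness each call goes through rpc.py against the target's /var/tmp/spdk.sock:

```shell
# The wait_for_buf target setup as plain RPC calls, dry-run by default.
# Pass the path to spdk/scripts/rpc.py as $1 to execute for real.
wait_for_buf_rpcs() {
    local rpc=${1:-echo}
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
}
```
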
00:26:26.487 12:40:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:26.487 [2024-11-05 12:40:55.581975] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:27.857 Initializing NVMe Controllers 00:26:27.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:27.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:27.857 Initialization complete. Launching workers. 00:26:27.857 ======================================================== 00:26:27.857 Latency(us) 00:26:27.857 Device Information : IOPS MiB/s Average min max 00:26:27.857 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 122.93 15.37 33704.76 7992.79 71825.98 00:26:27.857 ======================================================== 00:26:27.857 Total : 122.93 15.37 33704.76 7992.79 71825.98 00:26:27.857 00:26:27.857 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:27.857 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:27.857 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.857 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:27.857 12:40:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.857 12:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1942 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1942 -eq 0 ]] 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.857 rmmod nvme_tcp 00:26:27.857 rmmod nvme_fabrics 00:26:27.857 rmmod nvme_keyring 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 701522 ']' 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 701522 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 701522 ']' 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 701522 
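The pass criterion above is inverted from a typical benchmark: with only 154 small buffers, the perf run is expected to exhaust the pool, so the test passes only when nvmf_TCP's small_pool.retry counter from iobuf_get_stats is nonzero (1942 here); a zero retry_count would mean the wait-for-buffer path was never exercised. The rule as a tiny sketch:

```shell
# Pass/fail rule for wait_for_buf: the retry counter pulled from
# iobuf_get_stats must be nonzero, proving requests actually waited
# for buffers instead of always getting one immediately.
check_wait_for_buf() {
    local retry_count=$1
    [ "$retry_count" -ne 0 ]
}
```
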
00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:27.857 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 701522 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 701522' 00:26:28.117 killing process with pid 701522 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 701522 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 701522 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:28.117 12:40:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.117 12:40:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:30.647 00:26:30.647 real 0m6.683s 00:26:30.647 user 0m3.163s 00:26:30.647 sys 0m1.989s 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:30.647 ************************************ 00:26:30.647 END TEST nvmf_wait_for_buf 00:26:30.647 ************************************ 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:30.647 ************************************ 00:26:30.647 START TEST nvmf_fuzz 00:26:30.647 ************************************ 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:26:30.647 * Looking for test storage... 00:26:30.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:26:30.647 12:40:59 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.647 --rc genhtml_branch_coverage=1 00:26:30.647 --rc genhtml_function_coverage=1 
00:26:30.647 --rc genhtml_legend=1 00:26:30.647 --rc geninfo_all_blocks=1 00:26:30.647 --rc geninfo_unexecuted_blocks=1 00:26:30.647 00:26:30.647 ' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.647 --rc genhtml_branch_coverage=1 00:26:30.647 --rc genhtml_function_coverage=1 00:26:30.647 --rc genhtml_legend=1 00:26:30.647 --rc geninfo_all_blocks=1 00:26:30.647 --rc geninfo_unexecuted_blocks=1 00:26:30.647 00:26:30.647 ' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.647 --rc genhtml_branch_coverage=1 00:26:30.647 --rc genhtml_function_coverage=1 00:26:30.647 --rc genhtml_legend=1 00:26:30.647 --rc geninfo_all_blocks=1 00:26:30.647 --rc geninfo_unexecuted_blocks=1 00:26:30.647 00:26:30.647 ' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.647 --rc genhtml_branch_coverage=1 00:26:30.647 --rc genhtml_function_coverage=1 00:26:30.647 --rc genhtml_legend=1 00:26:30.647 --rc geninfo_all_blocks=1 00:26:30.647 --rc geninfo_unexecuted_blocks=1 00:26:30.647 00:26:30.647 ' 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.647 
12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.647 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:30.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:26:30.648 12:40:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.546 12:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.546 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:26:32.547 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:32.547 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:32.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:32.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.547 12:41:01 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:26:32.547 00:26:32.547 --- 10.0.0.2 ping statistics --- 00:26:32.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.547 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:26:32.547 00:26:32.547 --- 10.0.0.1 ping statistics --- 00:26:32.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.547 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=703744 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 703744 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' 
-z 703744 ']' 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:32.547 12:41:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 Malloc0 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:26:33.113 12:41:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:05.183 Fuzzing completed. 
Shutting down the fuzz application 00:27:05.183 00:27:05.183 Dumping successful admin opcodes: 00:27:05.183 8, 9, 10, 24, 00:27:05.183 Dumping successful io opcodes: 00:27:05.183 0, 9, 00:27:05.183 NS: 0x2000008eff00 I/O qp, Total commands completed: 505035, total successful commands: 2909, random_seed: 2592020224 00:27:05.183 NS: 0x2000008eff00 admin qp, Total commands completed: 60830, total successful commands: 482, random_seed: 65870272 00:27:05.183 12:41:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:05.183 Fuzzing completed. Shutting down the fuzz application 00:27:05.183 00:27:05.183 Dumping successful admin opcodes: 00:27:05.183 24, 00:27:05.183 Dumping successful io opcodes: 00:27:05.183 00:27:05.183 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2841308784 00:27:05.183 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2841425154 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:05.183 12:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.183 rmmod nvme_tcp 00:27:05.183 rmmod nvme_fabrics 00:27:05.183 rmmod nvme_keyring 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 703744 ']' 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 703744 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 703744 ']' 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 703744 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 703744 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 703744' 00:27:05.183 killing process with pid 703744 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 703744 00:27:05.183 12:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 703744 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.183 12:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:27:07.087 00:27:07.087 real 0m36.781s 00:27:07.087 user 0m50.891s 00:27:07.087 sys 0m14.846s 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:07.087 ************************************ 00:27:07.087 END TEST nvmf_fuzz 00:27:07.087 ************************************ 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:07.087 ************************************ 00:27:07.087 START TEST nvmf_multiconnection 00:27:07.087 ************************************ 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:07.087 * Looking for test storage... 
00:27:07.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:27:07.087 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:27:07.345 12:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:27:07.345 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.346 --rc genhtml_branch_coverage=1 00:27:07.346 --rc genhtml_function_coverage=1 00:27:07.346 --rc genhtml_legend=1 00:27:07.346 --rc geninfo_all_blocks=1 00:27:07.346 --rc geninfo_unexecuted_blocks=1 00:27:07.346 00:27:07.346 ' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.346 --rc genhtml_branch_coverage=1 00:27:07.346 --rc genhtml_function_coverage=1 00:27:07.346 --rc genhtml_legend=1 00:27:07.346 --rc geninfo_all_blocks=1 00:27:07.346 --rc geninfo_unexecuted_blocks=1 00:27:07.346 00:27:07.346 ' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.346 --rc genhtml_branch_coverage=1 00:27:07.346 --rc genhtml_function_coverage=1 00:27:07.346 --rc genhtml_legend=1 00:27:07.346 --rc geninfo_all_blocks=1 00:27:07.346 --rc geninfo_unexecuted_blocks=1 00:27:07.346 00:27:07.346 ' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.346 --rc genhtml_branch_coverage=1 00:27:07.346 --rc genhtml_function_coverage=1 00:27:07.346 --rc genhtml_legend=1 00:27:07.346 --rc geninfo_all_blocks=1 00:27:07.346 --rc geninfo_unexecuted_blocks=1 00:27:07.346 00:27:07.346 ' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.346 12:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:07.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.346 12:41:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.276 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.277 12:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.277 12:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:09.277 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:09.277 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:09.277 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:09.277 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.277 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.559 12:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:27:09.559 00:27:09.559 --- 10.0.0.2 ping statistics --- 00:27:09.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.559 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:09.559 00:27:09.559 --- 10.0.0.1 ping statistics --- 00:27:09.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.559 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
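The `nvmf_tcp_init` steps traced above (flush addresses, move the target NIC into its own network namespace, assign the 10.0.0.x pair, open port 4420, then ping both directions) can be sketched as a standalone dry-run script. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are taken from this run; `RUN=echo` only prints the commands, since actually applying them requires root:

```shell
#!/usr/bin/env bash
# Sketch of the netns-based target/initiator split used by nvmf_tcp_init.
# RUN=echo (the default here) dry-runs; set RUN= and run as root to apply.
RUN=${RUN:-echo}

setup_nvmf_tcp() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    $RUN ip -4 addr flush "$target_if"
    $RUN ip -4 addr flush "$initiator_if"
    $RUN ip netns add "$ns"
    # Target side is isolated in its own namespace; the initiator stays in the host namespace.
    $RUN ip link set "$target_if" netns "$ns"
    $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $RUN ip link set "$initiator_if" up
    $RUN ip netns exec "$ns" ip link set "$target_if" up
    $RUN ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP port toward the initiator-facing interface.
    $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_tcp cvl_0_0 cvl_0_1
```

The log then verifies the topology with a `ping` in each direction before the target app is launched inside the namespace.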
00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=709353 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 709353 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 709353 ']' 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:09.559 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:09.560 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:09.560 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.560 [2024-11-05 12:41:38.654169] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:27:09.560 [2024-11-05 12:41:38.654272] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.560 [2024-11-05 12:41:38.725743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.560 [2024-11-05 12:41:38.773232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.560 [2024-11-05 12:41:38.773282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.560 [2024-11-05 12:41:38.773310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.560 [2024-11-05 12:41:38.773321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.560 [2024-11-05 12:41:38.773331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:09.560 [2024-11-05 12:41:38.774874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.560 [2024-11-05 12:41:38.774929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.560 [2024-11-05 12:41:38.774997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.560 [2024-11-05 12:41:38.775000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 [2024-11-05 12:41:38.919696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:09.818 12:41:38 
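`waitforlisten` above blocks until the freshly started `nvmf_tgt` (pid 709353 in this run) is alive and its RPC socket appears. A minimal sketch of that polling loop, assuming the default `/var/tmp/spdk.sock` path and a configurable retry count (the framework's real helper also drives `rpc.py` probes, which this sketch omits):

```shell
# Poll until $pid is alive AND $sock exists as a UNIX socket.
# Returns 1 if the process dies or retries are exhausted.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$sock" ] && return 0               # RPC socket showed up
        sleep 0.1
    done
    return 1
}
```

Only once this returns does the script proceed to issue RPCs such as `nvmf_create_transport -t tcp -o -u 8192`.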
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 Malloc1 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 [2024-11-05 12:41:38.992801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 Malloc2 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.818 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.076 Malloc3 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.076 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 Malloc4 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 
12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 Malloc5 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 Malloc6 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 Malloc7 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.077 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.335 Malloc8 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.335 12:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.335 Malloc9 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.335 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 Malloc10 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 Malloc11 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:10.336 
12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.336 12:41:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
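The repetitive `rpc_cmd` sequence above (Malloc1 through Malloc11) is a single loop in multiconnection.sh: create a malloc bdev, a subsystem, attach the namespace, add a TCP listener. A dry-run sketch of that loop; `rpc.py` stands in for the framework's `rpc_cmd` wrapper, and `RPC=echo rpc.py` here only prints the calls instead of issuing them:

```shell
# Reconstructs the per-subsystem RPC loop seen in the log.
# Default is a dry run; point RPC at the real client to apply.
RPC=${RPC:-echo rpc.py}

make_subsystems() {
    local n=$1 i
    for i in $(seq 1 "$n"); do
        $RPC bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MiB bdev, 512 B blocks
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
}

make_subsystems 11
```

Each subsystem gets a distinct serial (`SPDK1`..`SPDK11`), which is what the connect phase later greps for in `lsblk` output.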
00:27:11.269 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:11.269 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:11.269 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:11.269 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:11.269 12:41:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:13.166 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:13.730 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:13.730 12:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:13.730 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:13.730 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:13.730 12:41:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:15.627 12:41:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:16.560 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:16.560 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:16.560 12:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:16.560 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:16.560 12:41:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:18.457 12:41:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:19.022 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:19.022 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:19.022 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:19.022 
12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:19.022 12:41:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:20.917 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:20.917 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:20.917 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:27:21.174 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:21.174 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:21.174 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:21.174 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:21.174 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:21.739 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:21.739 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:21.739 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:21.739 12:41:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:21.739 12:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.264 12:41:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:24.521 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:24.521 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:24.521 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:24.521 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:24.521 12:41:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:26.416 12:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:26.416 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:26.416 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:27:26.416 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:26.416 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:26.416 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:26.416 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:26.416 12:41:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:27.348 12:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:27.348 12:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:27.348 12:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:27.348 12:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:27.348 12:41:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:29.243 12:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:29.243 12:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:29.243 12:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:27:29.243 12:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:29.243 12:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:29.243 12:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:29.243 12:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.243 12:41:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:30.175 12:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:30.175 12:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:30.175 12:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:30.175 12:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:30.175 12:41:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:32.069 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:32.069 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:32.069 12:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:27:32.069 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:32.069 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:32.069 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:32.069 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:32.069 12:42:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:33.000 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:33.000 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:33.000 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:33.000 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:33.000 12:42:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:35.521 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:35.522 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:35.522 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:27:35.522 12:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:35.522 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:35.522 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:35.522 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:35.522 12:42:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:36.086 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:36.086 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:36.086 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:36.086 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:36.086 12:42:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:37.980 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:37.980 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:37.980 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:27:37.980 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:37.980 12:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:37.980 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:37.980 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:37.980 12:42:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:38.911 12:42:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:38.911 12:42:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:27:38.911 12:42:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:38.911 12:42:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:38.911 12:42:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:27:41.436 12:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:41.436 12:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:41.436 12:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:27:41.436 12:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:41.436 12:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:41.436 
12:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:27:41.436 12:42:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:41.436 [global] 00:27:41.436 thread=1 00:27:41.436 invalidate=1 00:27:41.436 rw=read 00:27:41.436 time_based=1 00:27:41.436 runtime=10 00:27:41.436 ioengine=libaio 00:27:41.436 direct=1 00:27:41.436 bs=262144 00:27:41.436 iodepth=64 00:27:41.436 norandommap=1 00:27:41.436 numjobs=1 00:27:41.436 00:27:41.436 [job0] 00:27:41.436 filename=/dev/nvme0n1 00:27:41.436 [job1] 00:27:41.436 filename=/dev/nvme10n1 00:27:41.436 [job2] 00:27:41.436 filename=/dev/nvme1n1 00:27:41.436 [job3] 00:27:41.436 filename=/dev/nvme2n1 00:27:41.436 [job4] 00:27:41.436 filename=/dev/nvme3n1 00:27:41.436 [job5] 00:27:41.436 filename=/dev/nvme4n1 00:27:41.436 [job6] 00:27:41.436 filename=/dev/nvme5n1 00:27:41.436 [job7] 00:27:41.436 filename=/dev/nvme6n1 00:27:41.436 [job8] 00:27:41.436 filename=/dev/nvme7n1 00:27:41.436 [job9] 00:27:41.436 filename=/dev/nvme8n1 00:27:41.436 [job10] 00:27:41.436 filename=/dev/nvme9n1 00:27:41.436 Could not set queue depth (nvme0n1) 00:27:41.436 Could not set queue depth (nvme10n1) 00:27:41.436 Could not set queue depth (nvme1n1) 00:27:41.436 Could not set queue depth (nvme2n1) 00:27:41.436 Could not set queue depth (nvme3n1) 00:27:41.436 Could not set queue depth (nvme4n1) 00:27:41.436 Could not set queue depth (nvme5n1) 00:27:41.436 Could not set queue depth (nvme6n1) 00:27:41.436 Could not set queue depth (nvme7n1) 00:27:41.436 Could not set queue depth (nvme8n1) 00:27:41.436 Could not set queue depth (nvme9n1) 00:27:41.436 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:27:41.436 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:41.436 fio-3.35 00:27:41.436 Starting 11 threads 00:27:53.684 00:27:53.684 job0: (groupid=0, jobs=1): err= 0: pid=714229: Tue Nov 5 12:42:20 2024 00:27:53.684 read: IOPS=149, BW=37.4MiB/s (39.2MB/s)(379MiB/10142msec) 00:27:53.684 slat (usec): min=12, max=205053, avg=6583.40, stdev=23859.58 00:27:53.684 clat (msec): min=34, max=829, avg=421.23, stdev=155.39 00:27:53.684 lat (msec): min=35, max=829, avg=427.82, stdev=157.23 00:27:53.684 clat percentiles (msec): 00:27:53.685 | 1.00th=[ 169], 5.00th=[ 211], 10.00th=[ 232], 20.00th=[ 284], 00:27:53.685 | 30.00th=[ 321], 40.00th=[ 355], 50.00th=[ 384], 60.00th=[ 447], 00:27:53.685 | 70.00th=[ 514], 80.00th=[ 584], 90.00th=[ 642], 95.00th=[ 684], 00:27:53.685 | 99.00th=[ 785], 99.50th=[ 810], 99.90th=[ 827], 99.95th=[ 827], 00:27:53.685 | 99.99th=[ 827] 00:27:53.685 bw ( KiB/s): min=18944, max=63488, 
per=5.00%, avg=37171.20, stdev=13588.35, samples=20 00:27:53.685 iops : min= 74, max= 248, avg=145.20, stdev=53.08, samples=20 00:27:53.685 lat (msec) : 50=0.59%, 250=13.46%, 500=53.23%, 750=31.40%, 1000=1.32% 00:27:53.685 cpu : usr=0.05%, sys=0.60%, ctx=193, majf=0, minf=4097 00:27:53.685 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:27:53.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.685 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.685 issued rwts: total=1516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.685 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.685 job1: (groupid=0, jobs=1): err= 0: pid=714230: Tue Nov 5 12:42:20 2024 00:27:53.685 read: IOPS=288, BW=72.2MiB/s (75.7MB/s)(727MiB/10061msec) 00:27:53.685 slat (usec): min=12, max=247393, avg=3438.66, stdev=15193.67 00:27:53.685 clat (msec): min=25, max=804, avg=217.96, stdev=177.12 00:27:53.685 lat (msec): min=25, max=848, avg=221.40, stdev=179.62 00:27:53.685 clat percentiles (msec): 00:27:53.685 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 67], 20.00th=[ 88], 00:27:53.685 | 30.00th=[ 107], 40.00th=[ 123], 50.00th=[ 157], 60.00th=[ 188], 00:27:53.685 | 70.00th=[ 247], 80.00th=[ 321], 90.00th=[ 514], 95.00th=[ 651], 00:27:53.685 | 99.00th=[ 726], 99.50th=[ 768], 99.90th=[ 802], 99.95th=[ 802], 00:27:53.685 | 99.99th=[ 802] 00:27:53.685 bw ( KiB/s): min=18432, max=192512, per=9.79%, avg=72780.80, stdev=53145.11, samples=20 00:27:53.685 iops : min= 72, max= 752, avg=284.30, stdev=207.60, samples=20 00:27:53.685 lat (msec) : 50=7.40%, 100=19.44%, 250=43.74%, 500=18.75%, 750=9.91% 00:27:53.685 lat (msec) : 1000=0.76% 00:27:53.685 cpu : usr=0.17%, sys=1.00%, ctx=408, majf=0, minf=4097 00:27:53.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:53.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.685 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.685 issued rwts: total=2906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.685 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.685 job2: (groupid=0, jobs=1): err= 0: pid=714231: Tue Nov 5 12:42:20 2024 00:27:53.685 read: IOPS=202, BW=50.5MiB/s (53.0MB/s)(513MiB/10157msec) 00:27:53.685 slat (usec): min=8, max=319852, avg=3724.92, stdev=18589.17 00:27:53.685 clat (usec): min=1390, max=900499, avg=312658.57, stdev=229841.78 00:27:53.685 lat (usec): min=1863, max=919722, avg=316383.49, stdev=233125.18 00:27:53.685 clat percentiles (msec): 00:27:53.685 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 54], 20.00th=[ 105], 00:27:53.685 | 30.00th=[ 138], 40.00th=[ 165], 50.00th=[ 226], 60.00th=[ 418], 00:27:53.685 | 70.00th=[ 481], 80.00th=[ 550], 90.00th=[ 651], 95.00th=[ 693], 00:27:53.685 | 99.00th=[ 818], 99.50th=[ 835], 99.90th=[ 902], 99.95th=[ 902], 00:27:53.685 | 99.99th=[ 902] 00:27:53.685 bw ( KiB/s): min=19968, max=139030, per=6.85%, avg=50957.90, stdev=34479.42, samples=20 00:27:53.685 iops : min= 78, max= 543, avg=199.05, stdev=134.67, samples=20 00:27:53.685 lat (msec) : 2=0.15%, 4=1.75%, 10=2.00%, 20=2.00%, 50=3.65% 00:27:53.685 lat (msec) : 100=9.06%, 250=34.68%, 500=20.46%, 750=24.16%, 1000=2.09% 00:27:53.685 cpu : usr=0.04%, sys=0.54%, ctx=460, majf=0, minf=4097 00:27:53.685 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:27:53.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.685 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.685 issued rwts: total=2053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.685 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.685 job3: (groupid=0, jobs=1): err= 0: pid=714232: Tue Nov 5 12:42:20 2024 00:27:53.685 read: IOPS=315, BW=78.8MiB/s (82.6MB/s)(799MiB/10139msec) 00:27:53.685 slat (usec): min=8, max=232088, avg=2510.02, stdev=13555.60 00:27:53.685 
clat (usec): min=886, max=888313, avg=200403.19, stdev=199795.71 00:27:53.685 lat (usec): min=941, max=888342, avg=202913.21, stdev=202406.39 00:27:53.685 clat percentiles (msec): 00:27:53.685 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 22], 00:27:53.685 | 30.00th=[ 72], 40.00th=[ 87], 50.00th=[ 107], 60.00th=[ 188], 00:27:53.685 | 70.00th=[ 279], 80.00th=[ 388], 90.00th=[ 502], 95.00th=[ 617], 00:27:53.685 | 99.00th=[ 760], 99.50th=[ 785], 99.90th=[ 860], 99.95th=[ 885], 00:27:53.685 | 99.99th=[ 885] 00:27:53.685 bw ( KiB/s): min=25088, max=182272, per=10.79%, avg=80182.60, stdev=57184.38, samples=20 00:27:53.685 iops : min= 98, max= 712, avg=313.20, stdev=223.39, samples=20 00:27:53.685 lat (usec) : 1000=0.13% 00:27:53.685 lat (msec) : 2=0.22%, 4=4.35%, 10=10.98%, 20=3.50%, 50=4.97% 00:27:53.685 lat (msec) : 100=24.97%, 250=17.93%, 500=22.40%, 750=9.20%, 1000=1.35% 00:27:53.685 cpu : usr=0.11%, sys=0.79%, ctx=977, majf=0, minf=4097 00:27:53.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:53.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.685 issued rwts: total=3196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.685 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.685 job4: (groupid=0, jobs=1): err= 0: pid=714233: Tue Nov 5 12:42:20 2024 00:27:53.685 read: IOPS=209, BW=52.3MiB/s (54.8MB/s)(530MiB/10138msec) 00:27:53.685 slat (usec): min=8, max=235354, avg=2485.19, stdev=12987.22 00:27:53.685 clat (msec): min=17, max=802, avg=303.48, stdev=178.38 00:27:53.685 lat (msec): min=17, max=802, avg=305.96, stdev=179.53 00:27:53.685 clat percentiles (msec): 00:27:53.685 | 1.00th=[ 46], 5.00th=[ 77], 10.00th=[ 105], 20.00th=[ 132], 00:27:53.685 | 30.00th=[ 182], 40.00th=[ 207], 50.00th=[ 255], 60.00th=[ 326], 00:27:53.685 | 70.00th=[ 397], 80.00th=[ 481], 90.00th=[ 575], 95.00th=[ 642], 
00:27:53.685 | 99.00th=[ 701], 99.50th=[ 726], 99.90th=[ 802], 99.95th=[ 802], 00:27:53.685 | 99.99th=[ 802] 00:27:53.685 bw ( KiB/s): min=26624, max=160768, per=7.08%, avg=52633.60, stdev=33485.04, samples=20 00:27:53.685 iops : min= 104, max= 628, avg=205.60, stdev=130.80, samples=20 00:27:53.685 lat (msec) : 20=0.14%, 50=1.27%, 100=6.94%, 250=40.49%, 500=33.88% 00:27:53.685 lat (msec) : 750=17.08%, 1000=0.19% 00:27:53.685 cpu : usr=0.10%, sys=0.66%, ctx=297, majf=0, minf=4097 00:27:53.685 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:53.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.685 issued rwts: total=2119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.685 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.685 job5: (groupid=0, jobs=1): err= 0: pid=714234: Tue Nov 5 12:42:20 2024 00:27:53.685 read: IOPS=377, BW=94.5MiB/s (99.1MB/s)(960MiB/10156msec) 00:27:53.685 slat (usec): min=7, max=656382, avg=1661.68, stdev=15821.15 00:27:53.685 clat (usec): min=768, max=1272.8k, avg=167566.12, stdev=179703.35 00:27:53.685 lat (usec): min=793, max=1272.8k, avg=169227.79, stdev=181929.09 00:27:53.685 clat percentiles (msec): 00:27:53.685 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 10], 20.00th=[ 40], 00:27:53.685 | 30.00th=[ 46], 40.00th=[ 54], 50.00th=[ 95], 60.00th=[ 150], 00:27:53.685 | 70.00th=[ 201], 80.00th=[ 275], 90.00th=[ 447], 95.00th=[ 542], 00:27:53.685 | 99.00th=[ 768], 99.50th=[ 818], 99.90th=[ 860], 99.95th=[ 1200], 00:27:53.685 | 99.99th=[ 1267] 00:27:53.685 bw ( KiB/s): min=27648, max=316928, per=13.68%, avg=101699.37, stdev=82418.26, samples=19 00:27:53.685 iops : min= 108, max= 1238, avg=397.26, stdev=321.95, samples=19 00:27:53.685 lat (usec) : 1000=0.08% 00:27:53.685 lat (msec) : 2=0.52%, 4=1.41%, 10=8.18%, 20=1.12%, 50=24.02% 00:27:53.685 lat (msec) : 100=15.63%, 250=26.47%, 
500=15.06%, 750=6.23%, 1000=1.22% 00:27:53.685 lat (msec) : 2000=0.05% 00:27:53.685 cpu : usr=0.20%, sys=0.98%, ctx=878, majf=0, minf=4097 00:27:53.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:53.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.685 issued rwts: total=3838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.685 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.685 job6: (groupid=0, jobs=1): err= 0: pid=714235: Tue Nov 5 12:42:20 2024 00:27:53.685 read: IOPS=466, BW=117MiB/s (122MB/s)(1174MiB/10070msec) 00:27:53.685 slat (usec): min=8, max=426842, avg=1533.17, stdev=13170.62 00:27:53.685 clat (msec): min=2, max=792, avg=135.66, stdev=164.56 00:27:53.685 lat (msec): min=2, max=1040, avg=137.20, stdev=166.70 00:27:53.685 clat percentiles (msec): 00:27:53.685 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 36], 00:27:53.685 | 30.00th=[ 38], 40.00th=[ 42], 50.00th=[ 57], 60.00th=[ 80], 00:27:53.685 | 70.00th=[ 128], 80.00th=[ 205], 90.00th=[ 401], 95.00th=[ 523], 00:27:53.685 | 99.00th=[ 709], 99.50th=[ 718], 99.90th=[ 776], 99.95th=[ 793], 00:27:53.685 | 99.99th=[ 793] 00:27:53.685 bw ( KiB/s): min=11264, max=445440, per=15.95%, avg=118553.60, stdev=120589.28, samples=20 00:27:53.685 iops : min= 44, max= 1740, avg=463.10, stdev=471.05, samples=20 00:27:53.685 lat (msec) : 4=0.06%, 10=1.64%, 20=2.47%, 50=41.80%, 100=19.02% 00:27:53.685 lat (msec) : 250=18.41%, 500=10.82%, 750=5.58%, 1000=0.19% 00:27:53.685 cpu : usr=0.27%, sys=1.27%, ctx=995, majf=0, minf=3721 00:27:53.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:53.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.685 issued rwts: total=4694,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:53.685 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.685 job7: (groupid=0, jobs=1): err= 0: pid=714236: Tue Nov 5 12:42:20 2024 00:27:53.685 read: IOPS=233, BW=58.4MiB/s (61.2MB/s)(593MiB/10154msec) 00:27:53.685 slat (usec): min=9, max=366691, avg=3115.75, stdev=19024.12 00:27:53.686 clat (msec): min=2, max=1018, avg=270.64, stdev=215.91 00:27:53.686 lat (msec): min=2, max=1018, avg=273.76, stdev=218.50 00:27:53.686 clat percentiles (msec): 00:27:53.686 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 61], 20.00th=[ 88], 00:27:53.686 | 30.00th=[ 118], 40.00th=[ 148], 50.00th=[ 192], 60.00th=[ 296], 00:27:53.686 | 70.00th=[ 363], 80.00th=[ 447], 90.00th=[ 575], 95.00th=[ 676], 00:27:53.686 | 99.00th=[ 978], 99.50th=[ 995], 99.90th=[ 1020], 99.95th=[ 1020], 00:27:53.686 | 99.99th=[ 1020] 00:27:53.686 bw ( KiB/s): min=19456, max=168448, per=7.95%, avg=59110.40, stdev=39559.20, samples=20 00:27:53.686 iops : min= 76, max= 658, avg=230.90, stdev=154.53, samples=20 00:27:53.686 lat (msec) : 4=0.17%, 10=3.67%, 20=0.59%, 50=5.23%, 100=15.43% 00:27:53.686 lat (msec) : 250=31.62%, 500=27.49%, 750=12.48%, 1000=2.91%, 2000=0.42% 00:27:53.686 cpu : usr=0.10%, sys=0.63%, ctx=549, majf=0, minf=4098 00:27:53.686 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.3% 00:27:53.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.686 issued rwts: total=2372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.686 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.686 job8: (groupid=0, jobs=1): err= 0: pid=714239: Tue Nov 5 12:42:20 2024 00:27:53.686 read: IOPS=132, BW=33.1MiB/s (34.7MB/s)(337MiB/10160msec) 00:27:53.686 slat (usec): min=10, max=221691, avg=4794.94, stdev=20363.83 00:27:53.686 clat (msec): min=22, max=879, avg=477.89, stdev=168.68 00:27:53.686 lat (msec): min=22, max=930, avg=482.69, 
stdev=171.84 00:27:53.686 clat percentiles (msec): 00:27:53.686 | 1.00th=[ 35], 5.00th=[ 105], 10.00th=[ 234], 20.00th=[ 368], 00:27:53.686 | 30.00th=[ 426], 40.00th=[ 464], 50.00th=[ 502], 60.00th=[ 535], 00:27:53.686 | 70.00th=[ 567], 80.00th=[ 609], 90.00th=[ 684], 95.00th=[ 709], 00:27:53.686 | 99.00th=[ 818], 99.50th=[ 844], 99.90th=[ 877], 99.95th=[ 877], 00:27:53.686 | 99.99th=[ 877] 00:27:53.686 bw ( KiB/s): min=19968, max=48640, per=4.41%, avg=32819.20, stdev=7945.60, samples=20 00:27:53.686 iops : min= 78, max= 190, avg=128.20, stdev=31.04, samples=20 00:27:53.686 lat (msec) : 50=2.23%, 100=2.67%, 250=6.02%, 500=38.63%, 750=47.85% 00:27:53.686 lat (msec) : 1000=2.60% 00:27:53.686 cpu : usr=0.05%, sys=0.47%, ctx=256, majf=0, minf=4097 00:27:53.686 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:27:53.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.686 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.686 issued rwts: total=1346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.686 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.686 job9: (groupid=0, jobs=1): err= 0: pid=714240: Tue Nov 5 12:42:20 2024 00:27:53.686 read: IOPS=316, BW=79.2MiB/s (83.0MB/s)(803MiB/10138msec) 00:27:53.686 slat (usec): min=8, max=311202, avg=1653.21, stdev=12518.77 00:27:53.686 clat (usec): min=746, max=826994, avg=200260.44, stdev=188666.44 00:27:53.686 lat (usec): min=772, max=921771, avg=201913.66, stdev=190459.59 00:27:53.686 clat percentiles (msec): 00:27:53.686 | 1.00th=[ 5], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 53], 00:27:53.686 | 30.00th=[ 63], 40.00th=[ 87], 50.00th=[ 133], 60.00th=[ 165], 00:27:53.686 | 70.00th=[ 245], 80.00th=[ 351], 90.00th=[ 531], 95.00th=[ 625], 00:27:53.686 | 99.00th=[ 701], 99.50th=[ 726], 99.90th=[ 810], 99.95th=[ 818], 00:27:53.686 | 99.99th=[ 827] 00:27:53.686 bw ( KiB/s): min=23040, max=299008, per=10.84%, avg=80563.20, 
stdev=72335.15, samples=20 00:27:53.686 iops : min= 90, max= 1168, avg=314.70, stdev=282.56, samples=20 00:27:53.686 lat (usec) : 750=0.03%, 1000=0.03% 00:27:53.686 lat (msec) : 2=0.16%, 4=0.53%, 10=1.15%, 20=1.49%, 50=13.48% 00:27:53.686 lat (msec) : 100=25.35%, 250=28.81%, 500=17.88%, 750=10.71%, 1000=0.37% 00:27:53.686 cpu : usr=0.11%, sys=0.85%, ctx=690, majf=0, minf=4097 00:27:53.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:53.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.686 issued rwts: total=3211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.686 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.686 job10: (groupid=0, jobs=1): err= 0: pid=714241: Tue Nov 5 12:42:20 2024 00:27:53.686 read: IOPS=222, BW=55.6MiB/s (58.3MB/s)(564MiB/10135msec) 00:27:53.686 slat (usec): min=12, max=210714, avg=4436.80, stdev=17770.16 00:27:53.686 clat (msec): min=42, max=689, avg=283.11, stdev=157.73 00:27:53.686 lat (msec): min=42, max=716, avg=287.54, stdev=160.43 00:27:53.686 clat percentiles (msec): 00:27:53.686 | 1.00th=[ 56], 5.00th=[ 80], 10.00th=[ 91], 20.00th=[ 126], 00:27:53.686 | 30.00th=[ 161], 40.00th=[ 209], 50.00th=[ 271], 60.00th=[ 313], 00:27:53.686 | 70.00th=[ 363], 80.00th=[ 447], 90.00th=[ 518], 95.00th=[ 558], 00:27:53.686 | 99.00th=[ 625], 99.50th=[ 634], 99.90th=[ 651], 99.95th=[ 651], 00:27:53.686 | 99.99th=[ 693] 00:27:53.686 bw ( KiB/s): min=26624, max=136704, per=7.54%, avg=56069.60, stdev=33981.99, samples=20 00:27:53.686 iops : min= 104, max= 534, avg=219.00, stdev=132.74, samples=20 00:27:53.686 lat (msec) : 50=0.40%, 100=11.40%, 250=34.61%, 500=41.66%, 750=11.93% 00:27:53.686 cpu : usr=0.13%, sys=0.79%, ctx=236, majf=0, minf=4097 00:27:53.686 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:27:53.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:53.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:53.686 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:53.686 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:53.686 00:27:53.686 Run status group 0 (all jobs): 00:27:53.686 READ: bw=726MiB/s (761MB/s), 33.1MiB/s-117MiB/s (34.7MB/s-122MB/s), io=7376MiB (7735MB), run=10061-10160msec 00:27:53.686 00:27:53.686 Disk stats (read/write): 00:27:53.686 nvme0n1: ios=2873/0, merge=0/0, ticks=1220969/0, in_queue=1220969, util=97.30% 00:27:53.686 nvme10n1: ios=5641/0, merge=0/0, ticks=1237990/0, in_queue=1237990, util=97.51% 00:27:53.686 nvme1n1: ios=3966/0, merge=0/0, ticks=1204312/0, in_queue=1204312, util=97.75% 00:27:53.686 nvme2n1: ios=6226/0, merge=0/0, ticks=1225057/0, in_queue=1225057, util=97.88% 00:27:53.686 nvme3n1: ios=4085/0, merge=0/0, ticks=1241741/0, in_queue=1241741, util=97.96% 00:27:53.686 nvme4n1: ios=7553/0, merge=0/0, ticks=1215248/0, in_queue=1215248, util=98.27% 00:27:53.686 nvme5n1: ios=9191/0, merge=0/0, ticks=1225700/0, in_queue=1225700, util=98.43% 00:27:53.686 nvme6n1: ios=4606/0, merge=0/0, ticks=1197788/0, in_queue=1197788, util=98.53% 00:27:53.686 nvme7n1: ios=2541/0, merge=0/0, ticks=1212399/0, in_queue=1212399, util=98.93% 00:27:53.686 nvme8n1: ios=6270/0, merge=0/0, ticks=1233214/0, in_queue=1233214, util=99.11% 00:27:53.686 nvme9n1: ios=4368/0, merge=0/0, ticks=1233075/0, in_queue=1233075, util=99.23% 00:27:53.686 12:42:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:53.686 [global] 00:27:53.686 thread=1 00:27:53.686 invalidate=1 00:27:53.686 rw=randwrite 00:27:53.686 time_based=1 00:27:53.686 runtime=10 00:27:53.686 ioengine=libaio 00:27:53.686 direct=1 00:27:53.686 bs=262144 00:27:53.686 iodepth=64 00:27:53.686 norandommap=1 
00:27:53.686 numjobs=1 00:27:53.686 00:27:53.686 [job0] 00:27:53.686 filename=/dev/nvme0n1 00:27:53.686 [job1] 00:27:53.686 filename=/dev/nvme10n1 00:27:53.686 [job2] 00:27:53.686 filename=/dev/nvme1n1 00:27:53.686 [job3] 00:27:53.686 filename=/dev/nvme2n1 00:27:53.686 [job4] 00:27:53.686 filename=/dev/nvme3n1 00:27:53.686 [job5] 00:27:53.686 filename=/dev/nvme4n1 00:27:53.686 [job6] 00:27:53.686 filename=/dev/nvme5n1 00:27:53.686 [job7] 00:27:53.686 filename=/dev/nvme6n1 00:27:53.686 [job8] 00:27:53.686 filename=/dev/nvme7n1 00:27:53.686 [job9] 00:27:53.686 filename=/dev/nvme8n1 00:27:53.686 [job10] 00:27:53.686 filename=/dev/nvme9n1 00:27:53.686 Could not set queue depth (nvme0n1) 00:27:53.686 Could not set queue depth (nvme10n1) 00:27:53.686 Could not set queue depth (nvme1n1) 00:27:53.686 Could not set queue depth (nvme2n1) 00:27:53.686 Could not set queue depth (nvme3n1) 00:27:53.686 Could not set queue depth (nvme4n1) 00:27:53.686 Could not set queue depth (nvme5n1) 00:27:53.686 Could not set queue depth (nvme6n1) 00:27:53.686 Could not set queue depth (nvme7n1) 00:27:53.686 Could not set queue depth (nvme8n1) 00:27:53.686 Could not set queue depth (nvme9n1) 00:27:53.686 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.686 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job6: (g=0): rw=randwrite, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:53.687 fio-3.35 00:27:53.687 Starting 11 threads 00:28:03.650 00:28:03.650 job0: (groupid=0, jobs=1): err= 0: pid=714975: Tue Nov 5 12:42:31 2024 00:28:03.650 write: IOPS=253, BW=63.3MiB/s (66.4MB/s)(640MiB/10101msec); 0 zone resets 00:28:03.650 slat (usec): min=21, max=149632, avg=2969.19, stdev=7928.82 00:28:03.650 clat (usec): min=1385, max=605343, avg=249647.25, stdev=134665.03 00:28:03.650 lat (usec): min=1438, max=613796, avg=252616.44, stdev=136317.76 00:28:03.650 clat percentiles (msec): 00:28:03.650 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 35], 20.00th=[ 161], 00:28:03.650 | 30.00th=[ 197], 40.00th=[ 224], 50.00th=[ 245], 60.00th=[ 266], 00:28:03.650 | 70.00th=[ 300], 80.00th=[ 376], 90.00th=[ 435], 95.00th=[ 481], 00:28:03.650 | 99.00th=[ 575], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 600], 00:28:03.650 | 99.99th=[ 609] 00:28:03.650 bw ( KiB/s): min=27648, max=146944, per=6.22%, avg=63872.00, stdev=26130.16, samples=20 00:28:03.650 iops : min= 108, max= 574, avg=249.50, stdev=102.07, samples=20 00:28:03.650 lat (msec) : 2=0.12%, 4=1.56%, 10=2.07%, 20=3.56%, 50=4.57% 00:28:03.650 lat (msec) : 100=3.09%, 250=38.39%, 500=43.04%, 750=3.60% 00:28:03.650 cpu : usr=0.78%, sys=0.91%, ctx=1308, majf=0, minf=1 00:28:03.650 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:28:03.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:03.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.650 issued rwts: total=0,2558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.650 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.650 job1: (groupid=0, jobs=1): err= 0: pid=714978: Tue Nov 5 12:42:31 2024 00:28:03.650 write: IOPS=499, BW=125MiB/s (131MB/s)(1267MiB/10142msec); 0 zone resets 00:28:03.650 slat (usec): min=17, max=128495, avg=1166.29, stdev=4584.00 00:28:03.650 clat (usec): min=750, max=627840, avg=126860.89, stdev=113604.20 00:28:03.650 lat (usec): min=787, max=627923, avg=128027.18, stdev=114844.16 00:28:03.650 clat percentiles (msec): 00:28:03.650 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 22], 20.00th=[ 43], 00:28:03.650 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 85], 60.00th=[ 121], 00:28:03.650 | 70.00th=[ 155], 80.00th=[ 213], 90.00th=[ 284], 95.00th=[ 347], 00:28:03.650 | 99.00th=[ 550], 99.50th=[ 592], 99.90th=[ 617], 99.95th=[ 625], 00:28:03.650 | 99.99th=[ 625] 00:28:03.650 bw ( KiB/s): min=30208, max=328704, per=12.47%, avg=128128.00, stdev=90485.10, samples=20 00:28:03.650 iops : min= 118, max= 1284, avg=500.50, stdev=353.46, samples=20 00:28:03.650 lat (usec) : 1000=0.12% 00:28:03.650 lat (msec) : 2=0.43%, 4=1.22%, 10=1.72%, 20=5.92%, 50=17.01% 00:28:03.650 lat (msec) : 100=27.58%, 250=31.51%, 500=13.06%, 750=1.42% 00:28:03.650 cpu : usr=1.75%, sys=1.85%, ctx=3192, majf=0, minf=1 00:28:03.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:03.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.650 issued rwts: total=0,5068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.650 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.650 job2: (groupid=0, jobs=1): err= 0: pid=714988: Tue Nov 5 12:42:31 2024 00:28:03.650 write: IOPS=292, BW=73.2MiB/s (76.8MB/s)(745MiB/10174msec); 0 
zone resets 00:28:03.650 slat (usec): min=18, max=57750, avg=1939.03, stdev=5956.73 00:28:03.650 clat (usec): min=1392, max=575469, avg=216442.52, stdev=132862.74 00:28:03.650 lat (usec): min=1429, max=575497, avg=218381.55, stdev=134046.98 00:28:03.650 clat percentiles (msec): 00:28:03.650 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 101], 00:28:03.650 | 30.00th=[ 140], 40.00th=[ 171], 50.00th=[ 213], 60.00th=[ 245], 00:28:03.650 | 70.00th=[ 288], 80.00th=[ 347], 90.00th=[ 397], 95.00th=[ 451], 00:28:03.650 | 99.00th=[ 518], 99.50th=[ 542], 99.90th=[ 575], 99.95th=[ 575], 00:28:03.650 | 99.99th=[ 575] 00:28:03.650 bw ( KiB/s): min=34816, max=140800, per=7.26%, avg=74649.65, stdev=28048.16, samples=20 00:28:03.650 iops : min= 136, max= 550, avg=291.55, stdev=109.62, samples=20 00:28:03.650 lat (msec) : 2=0.07%, 4=0.57%, 10=2.48%, 20=3.86%, 50=8.73% 00:28:03.651 lat (msec) : 100=4.33%, 250=41.36%, 500=37.40%, 750=1.21% 00:28:03.651 cpu : usr=0.97%, sys=1.05%, ctx=1867, majf=0, minf=1 00:28:03.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:28:03.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.651 issued rwts: total=0,2979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.651 job3: (groupid=0, jobs=1): err= 0: pid=714989: Tue Nov 5 12:42:31 2024 00:28:03.651 write: IOPS=248, BW=62.2MiB/s (65.3MB/s)(631MiB/10142msec); 0 zone resets 00:28:03.651 slat (usec): min=14, max=96125, avg=3061.27, stdev=7703.27 00:28:03.651 clat (msec): min=2, max=606, avg=253.90, stdev=133.32 00:28:03.651 lat (msec): min=2, max=612, avg=256.96, stdev=135.22 00:28:03.651 clat percentiles (msec): 00:28:03.651 | 1.00th=[ 31], 5.00th=[ 54], 10.00th=[ 90], 20.00th=[ 144], 00:28:03.651 | 30.00th=[ 167], 40.00th=[ 199], 50.00th=[ 230], 60.00th=[ 271], 00:28:03.651 | 
70.00th=[ 330], 80.00th=[ 384], 90.00th=[ 451], 95.00th=[ 477], 00:28:03.651 | 99.00th=[ 575], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 600], 00:28:03.651 | 99.99th=[ 609] 00:28:03.651 bw ( KiB/s): min=33792, max=141824, per=6.13%, avg=63014.50, stdev=27667.82, samples=20 00:28:03.651 iops : min= 132, max= 554, avg=246.10, stdev=108.08, samples=20 00:28:03.651 lat (msec) : 4=0.28%, 10=0.36%, 20=0.08%, 50=3.92%, 100=7.68% 00:28:03.651 lat (msec) : 250=41.70%, 500=43.45%, 750=2.53% 00:28:03.651 cpu : usr=0.88%, sys=0.90%, ctx=1216, majf=0, minf=1 00:28:03.651 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:28:03.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.651 issued rwts: total=0,2525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.651 job4: (groupid=0, jobs=1): err= 0: pid=714990: Tue Nov 5 12:42:31 2024 00:28:03.651 write: IOPS=396, BW=99.1MiB/s (104MB/s)(1008MiB/10175msec); 0 zone resets 00:28:03.651 slat (usec): min=16, max=132538, avg=1438.09, stdev=5199.96 00:28:03.651 clat (usec): min=913, max=656997, avg=159949.69, stdev=111248.21 00:28:03.651 lat (usec): min=958, max=657045, avg=161387.78, stdev=112105.47 00:28:03.651 clat percentiles (msec): 00:28:03.651 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 42], 20.00th=[ 62], 00:28:03.651 | 30.00th=[ 89], 40.00th=[ 110], 50.00th=[ 133], 60.00th=[ 157], 00:28:03.651 | 70.00th=[ 201], 80.00th=[ 262], 90.00th=[ 330], 95.00th=[ 368], 00:28:03.651 | 99.00th=[ 481], 99.50th=[ 514], 99.90th=[ 625], 99.95th=[ 634], 00:28:03.651 | 99.99th=[ 659] 00:28:03.651 bw ( KiB/s): min=40016, max=175616, per=9.89%, avg=101600.40, stdev=35574.97, samples=20 00:28:03.651 iops : min= 156, max= 686, avg=396.85, stdev=138.99, samples=20 00:28:03.651 lat (usec) : 1000=0.02% 00:28:03.651 lat (msec) : 2=0.15%, 4=0.69%, 10=0.87%, 
20=2.38%, 50=9.10% 00:28:03.651 lat (msec) : 100=21.47%, 250=42.82%, 500=21.80%, 750=0.69% 00:28:03.651 cpu : usr=1.19%, sys=1.26%, ctx=2514, majf=0, minf=1 00:28:03.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:03.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.651 issued rwts: total=0,4033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.651 job5: (groupid=0, jobs=1): err= 0: pid=714991: Tue Nov 5 12:42:31 2024 00:28:03.651 write: IOPS=491, BW=123MiB/s (129MB/s)(1250MiB/10173msec); 0 zone resets 00:28:03.651 slat (usec): min=17, max=33030, avg=1624.19, stdev=4030.58 00:28:03.651 clat (usec): min=1125, max=437203, avg=128496.57, stdev=87099.30 00:28:03.651 lat (usec): min=1677, max=437281, avg=130120.76, stdev=87994.63 00:28:03.651 clat percentiles (msec): 00:28:03.651 | 1.00th=[ 6], 5.00th=[ 31], 10.00th=[ 41], 20.00th=[ 46], 00:28:03.651 | 30.00th=[ 62], 40.00th=[ 88], 50.00th=[ 107], 60.00th=[ 129], 00:28:03.651 | 70.00th=[ 167], 80.00th=[ 218], 90.00th=[ 262], 95.00th=[ 288], 00:28:03.651 | 99.00th=[ 359], 99.50th=[ 401], 99.90th=[ 426], 99.95th=[ 430], 00:28:03.651 | 99.99th=[ 439] 00:28:03.651 bw ( KiB/s): min=43008, max=360960, per=12.29%, avg=126352.05, stdev=77793.74, samples=20 00:28:03.651 iops : min= 168, max= 1410, avg=493.55, stdev=303.89, samples=20 00:28:03.651 lat (msec) : 2=0.08%, 4=0.58%, 10=1.56%, 20=1.98%, 50=19.44% 00:28:03.651 lat (msec) : 100=22.66%, 250=40.91%, 500=12.78% 00:28:03.651 cpu : usr=1.62%, sys=1.63%, ctx=1862, majf=0, minf=1 00:28:03.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:28:03.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.651 issued rwts: 
total=0,4999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.651 job6: (groupid=0, jobs=1): err= 0: pid=714992: Tue Nov 5 12:42:31 2024 00:28:03.651 write: IOPS=529, BW=132MiB/s (139MB/s)(1330MiB/10040msec); 0 zone resets 00:28:03.651 slat (usec): min=16, max=218642, avg=787.44, stdev=5280.07 00:28:03.651 clat (usec): min=761, max=631134, avg=120002.48, stdev=124270.45 00:28:03.651 lat (usec): min=795, max=631162, avg=120789.93, stdev=124882.07 00:28:03.651 clat percentiles (msec): 00:28:03.651 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 18], 20.00th=[ 36], 00:28:03.651 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 69], 60.00th=[ 88], 00:28:03.651 | 70.00th=[ 126], 80.00th=[ 184], 90.00th=[ 305], 95.00th=[ 401], 00:28:03.651 | 99.00th=[ 567], 99.50th=[ 592], 99.90th=[ 625], 99.95th=[ 625], 00:28:03.651 | 99.99th=[ 634] 00:28:03.651 bw ( KiB/s): min=35840, max=296448, per=13.09%, avg=134506.90, stdev=82599.82, samples=20 00:28:03.651 iops : min= 140, max= 1158, avg=525.40, stdev=322.64, samples=20 00:28:03.651 lat (usec) : 1000=0.13% 00:28:03.651 lat (msec) : 2=0.55%, 4=0.81%, 10=3.80%, 20=6.81%, 50=16.92% 00:28:03.651 lat (msec) : 100=33.38%, 250=22.83%, 500=12.75%, 750=2.03% 00:28:03.651 cpu : usr=1.55%, sys=1.72%, ctx=3665, majf=0, minf=2 00:28:03.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:03.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.651 issued rwts: total=0,5318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.651 job7: (groupid=0, jobs=1): err= 0: pid=714993: Tue Nov 5 12:42:31 2024 00:28:03.651 write: IOPS=389, BW=97.5MiB/s (102MB/s)(992MiB/10173msec); 0 zone resets 00:28:03.651 slat (usec): min=16, max=104600, avg=1394.70, stdev=4883.16 00:28:03.651 clat (usec): min=936, 
max=585996, avg=162650.35, stdev=121639.81 00:28:03.651 lat (usec): min=987, max=590330, avg=164045.05, stdev=122704.33 00:28:03.651 clat percentiles (msec): 00:28:03.651 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 49], 00:28:03.651 | 30.00th=[ 75], 40.00th=[ 108], 50.00th=[ 138], 60.00th=[ 169], 00:28:03.651 | 70.00th=[ 234], 80.00th=[ 271], 90.00th=[ 330], 95.00th=[ 380], 00:28:03.651 | 99.00th=[ 510], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 575], 00:28:03.651 | 99.99th=[ 584] 00:28:03.651 bw ( KiB/s): min=49152, max=235008, per=9.73%, avg=99946.30, stdev=50406.34, samples=20 00:28:03.651 iops : min= 192, max= 918, avg=390.35, stdev=196.90, samples=20 00:28:03.651 lat (usec) : 1000=0.03% 00:28:03.651 lat (msec) : 2=0.48%, 4=0.45%, 10=2.45%, 20=8.97%, 50=9.10% 00:28:03.651 lat (msec) : 100=16.08%, 250=35.77%, 500=25.49%, 750=1.18% 00:28:03.651 cpu : usr=1.36%, sys=1.33%, ctx=2732, majf=0, minf=1 00:28:03.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:03.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.651 issued rwts: total=0,3967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.651 job8: (groupid=0, jobs=1): err= 0: pid=714994: Tue Nov 5 12:42:31 2024 00:28:03.651 write: IOPS=310, BW=77.5MiB/s (81.3MB/s)(786MiB/10140msec); 0 zone resets 00:28:03.651 slat (usec): min=21, max=125990, avg=2618.35, stdev=6787.89 00:28:03.651 clat (msec): min=2, max=539, avg=203.61, stdev=118.27 00:28:03.651 lat (msec): min=2, max=539, avg=206.23, stdev=119.90 00:28:03.651 clat percentiles (msec): 00:28:03.651 | 1.00th=[ 10], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 75], 00:28:03.651 | 30.00th=[ 110], 40.00th=[ 178], 50.00th=[ 213], 60.00th=[ 241], 00:28:03.651 | 70.00th=[ 271], 80.00th=[ 305], 90.00th=[ 351], 95.00th=[ 409], 00:28:03.651 | 99.00th=[ 
489], 99.50th=[ 506], 99.90th=[ 535], 99.95th=[ 535], 00:28:03.651 | 99.99th=[ 542] 00:28:03.651 bw ( KiB/s): min=36864, max=237056, per=7.67%, avg=78859.15, stdev=50467.35, samples=20 00:28:03.651 iops : min= 144, max= 926, avg=308.00, stdev=197.09, samples=20 00:28:03.651 lat (msec) : 4=0.10%, 10=0.95%, 20=1.02%, 50=9.16%, 100=17.21% 00:28:03.651 lat (msec) : 250=34.26%, 500=36.55%, 750=0.76% 00:28:03.651 cpu : usr=0.97%, sys=0.96%, ctx=1519, majf=0, minf=1 00:28:03.651 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:28:03.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.651 issued rwts: total=0,3144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.651 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.651 job9: (groupid=0, jobs=1): err= 0: pid=714995: Tue Nov 5 12:42:31 2024 00:28:03.651 write: IOPS=274, BW=68.6MiB/s (71.9MB/s)(693MiB/10100msec); 0 zone resets 00:28:03.651 slat (usec): min=16, max=125459, avg=2412.44, stdev=6992.78 00:28:03.651 clat (msec): min=5, max=518, avg=230.66, stdev=123.10 00:28:03.651 lat (msec): min=5, max=536, avg=233.08, stdev=124.45 00:28:03.651 clat percentiles (msec): 00:28:03.651 | 1.00th=[ 16], 5.00th=[ 38], 10.00th=[ 78], 20.00th=[ 103], 00:28:03.651 | 30.00th=[ 155], 40.00th=[ 197], 50.00th=[ 222], 60.00th=[ 255], 00:28:03.651 | 70.00th=[ 288], 80.00th=[ 338], 90.00th=[ 418], 95.00th=[ 451], 00:28:03.651 | 99.00th=[ 481], 99.50th=[ 502], 99.90th=[ 514], 99.95th=[ 518], 00:28:03.651 | 99.99th=[ 518] 00:28:03.651 bw ( KiB/s): min=34816, max=183808, per=6.75%, avg=69350.40, stdev=34770.98, samples=20 00:28:03.652 iops : min= 136, max= 718, avg=270.90, stdev=135.82, samples=20 00:28:03.652 lat (msec) : 10=0.32%, 20=1.23%, 50=4.69%, 100=12.45%, 250=40.37% 00:28:03.652 lat (msec) : 500=40.48%, 750=0.47% 00:28:03.652 cpu : usr=0.95%, sys=0.89%, ctx=1528, majf=0, minf=1 
00:28:03.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:28:03.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.652 issued rwts: total=0,2772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.652 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:03.652 job10: (groupid=0, jobs=1): err= 0: pid=714996: Tue Nov 5 12:42:31 2024 00:28:03.652 write: IOPS=344, BW=86.2MiB/s (90.4MB/s)(871MiB/10104msec); 0 zone resets 00:28:03.652 slat (usec): min=14, max=162832, avg=1535.93, stdev=6162.52 00:28:03.652 clat (usec): min=821, max=734586, avg=184049.18, stdev=132510.06 00:28:03.652 lat (usec): min=845, max=734626, avg=185585.11, stdev=133859.02 00:28:03.652 clat percentiles (msec): 00:28:03.652 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 24], 20.00th=[ 54], 00:28:03.652 | 30.00th=[ 101], 40.00th=[ 126], 50.00th=[ 174], 60.00th=[ 209], 00:28:03.652 | 70.00th=[ 236], 80.00th=[ 296], 90.00th=[ 363], 95.00th=[ 422], 00:28:03.652 | 99.00th=[ 567], 99.50th=[ 659], 99.90th=[ 726], 99.95th=[ 735], 00:28:03.652 | 99.99th=[ 735] 00:28:03.652 bw ( KiB/s): min=20480, max=213504, per=8.52%, avg=87543.15, stdev=44250.34, samples=20 00:28:03.652 iops : min= 80, max= 834, avg=341.90, stdev=172.82, samples=20 00:28:03.652 lat (usec) : 1000=0.11% 00:28:03.652 lat (msec) : 2=0.40%, 4=1.38%, 10=2.87%, 20=3.59%, 50=10.82% 00:28:03.652 lat (msec) : 100=10.82%, 250=44.07%, 500=23.72%, 750=2.21% 00:28:03.652 cpu : usr=0.92%, sys=1.31%, ctx=2408, majf=0, minf=1 00:28:03.652 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:28:03.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:03.652 issued rwts: total=0,3483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.652 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:28:03.652 00:28:03.652 Run status group 0 (all jobs): 00:28:03.652 WRITE: bw=1004MiB/s (1052MB/s), 62.2MiB/s-132MiB/s (65.3MB/s-139MB/s), io=9.97GiB (10.7GB), run=10040-10175msec 00:28:03.652 00:28:03.652 Disk stats (read/write): 00:28:03.652 nvme0n1: ios=49/4892, merge=0/0, ticks=2217/1210796, in_queue=1213013, util=99.97% 00:28:03.652 nvme10n1: ios=46/9985, merge=0/0, ticks=2066/1218298, in_queue=1220364, util=100.00% 00:28:03.652 nvme1n1: ios=45/5801, merge=0/0, ticks=2034/1218441, in_queue=1220475, util=100.00% 00:28:03.652 nvme2n1: ios=0/4891, merge=0/0, ticks=0/1211650, in_queue=1211650, util=97.75% 00:28:03.652 nvme3n1: ios=0/7906, merge=0/0, ticks=0/1218377, in_queue=1218377, util=97.83% 00:28:03.652 nvme4n1: ios=46/9801, merge=0/0, ticks=612/1210222, in_queue=1210834, util=100.00% 00:28:03.652 nvme5n1: ios=0/10389, merge=0/0, ticks=0/1237024, in_queue=1237024, util=98.32% 00:28:03.652 nvme6n1: ios=0/7778, merge=0/0, ticks=0/1218976, in_queue=1218976, util=98.40% 00:28:03.652 nvme7n1: ios=39/6131, merge=0/0, ticks=1812/1200671, in_queue=1202483, util=99.81% 00:28:03.652 nvme8n1: ios=0/5359, merge=0/0, ticks=0/1215371, in_queue=1215371, util=98.88% 00:28:03.652 nvme9n1: ios=0/6778, merge=0/0, ticks=0/1225098, in_queue=1225098, util=99.05% 00:28:03.652 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:03.652 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:03.652 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.652 12:42:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:03.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK1 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:03.652 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:03.652 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:28:03.652 12:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.652 12:42:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:03.910 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # 
grep -q -w SPDK4 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:03.910 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:04.168 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.168 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:04.426 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:04.426 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.426 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:04.684 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:04.684 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:04.684 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.942 12:42:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:04.942 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:04.942 12:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:04.942 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1221 -- # local i=0 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@124 -- # set +e 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:04.942 rmmod nvme_tcp 00:28:04.942 rmmod nvme_fabrics 00:28:04.942 rmmod nvme_keyring 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 709353 ']' 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 709353 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 709353 ']' 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 709353 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:04.942 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 709353 00:28:05.200 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:05.200 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:05.200 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 709353' 00:28:05.200 killing process 
with pid 709353 00:28:05.200 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 709353 00:28:05.200 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 709353 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.458 12:42:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.994 00:28:07.994 real 1m0.470s 00:28:07.994 user 3m31.070s 00:28:07.994 sys 0m17.008s 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:07.994 ************************************ 00:28:07.994 END TEST nvmf_multiconnection 00:28:07.994 ************************************ 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:07.994 ************************************ 00:28:07.994 START TEST nvmf_initiator_timeout 00:28:07.994 ************************************ 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:07.994 * Looking for test storage... 
00:28:07.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:07.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.994 --rc genhtml_branch_coverage=1 00:28:07.994 --rc genhtml_function_coverage=1 00:28:07.994 --rc genhtml_legend=1 00:28:07.994 --rc geninfo_all_blocks=1 00:28:07.994 --rc geninfo_unexecuted_blocks=1 00:28:07.994 00:28:07.994 ' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:07.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.994 --rc genhtml_branch_coverage=1 00:28:07.994 --rc genhtml_function_coverage=1 00:28:07.994 --rc genhtml_legend=1 00:28:07.994 --rc geninfo_all_blocks=1 00:28:07.994 --rc geninfo_unexecuted_blocks=1 00:28:07.994 00:28:07.994 ' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:07.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.994 --rc genhtml_branch_coverage=1 00:28:07.994 --rc genhtml_function_coverage=1 00:28:07.994 --rc genhtml_legend=1 00:28:07.994 --rc geninfo_all_blocks=1 00:28:07.994 --rc geninfo_unexecuted_blocks=1 00:28:07.994 00:28:07.994 ' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:07.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.994 --rc genhtml_branch_coverage=1 00:28:07.994 --rc genhtml_function_coverage=1 00:28:07.994 --rc genhtml_legend=1 00:28:07.994 --rc geninfo_all_blocks=1 00:28:07.994 --rc geninfo_unexecuted_blocks=1 00:28:07.994 00:28:07.994 ' 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.994 
12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.994 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.995 12:42:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.898 12:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.898 12:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.898 12:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.898 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.899 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.157 12:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:28:10.157 00:28:10.157 --- 10.0.0.2 ping statistics --- 00:28:10.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.157 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:28:10.157 00:28:10.157 --- 10.0.0.1 ping statistics --- 00:28:10.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.157 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=718190 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 718190 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 718190 ']' 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:10.157 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.157 [2024-11-05 12:42:39.303269] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:28:10.157 [2024-11-05 12:42:39.303342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.157 [2024-11-05 12:42:39.382207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.415 [2024-11-05 12:42:39.430146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:10.415 [2024-11-05 12:42:39.430234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.415 [2024-11-05 12:42:39.430258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.415 [2024-11-05 12:42:39.430268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.415 [2024-11-05 12:42:39.430278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.415 [2024-11-05 12:42:39.431836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.415 [2024-11-05 12:42:39.431907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.415 [2024-11-05 12:42:39.431974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.415 [2024-11-05 12:42:39.431976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:10.415 
12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 Malloc0 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 Delay0 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 [2024-11-05 12:42:39.605158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:10.415 [2024-11-05 12:42:39.633428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.415 12:42:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:11.347 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:11.347 
12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:28:11.347 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:28:11.347 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:28:11.347 12:42:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=718615 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:13.243 12:42:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:13.243 [global] 00:28:13.243 thread=1 00:28:13.243 invalidate=1 00:28:13.243 rw=write 00:28:13.243 time_based=1 00:28:13.243 runtime=60 00:28:13.243 ioengine=libaio 00:28:13.243 direct=1 00:28:13.243 bs=4096 00:28:13.243 
iodepth=1 00:28:13.243 norandommap=0 00:28:13.243 numjobs=1 00:28:13.244 00:28:13.244 verify_dump=1 00:28:13.244 verify_backlog=512 00:28:13.244 verify_state_save=0 00:28:13.244 do_verify=1 00:28:13.244 verify=crc32c-intel 00:28:13.244 [job0] 00:28:13.244 filename=/dev/nvme0n1 00:28:13.244 Could not set queue depth (nvme0n1) 00:28:13.244 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:13.244 fio-3.35 00:28:13.244 Starting 1 thread 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:16.521 true 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:16.521 true 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.521 true 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:16.521 true 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.521 12:42:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.054 true 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.054 true 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.054 12:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:19.054 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.055 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.312 true 00:28:19.312 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.312 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:19.312 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.312 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:19.312 true 00:28:19.312 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.312 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:19.312 12:42:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 718615 00:29:15.520 00:29:15.520 job0: (groupid=0, jobs=1): err= 0: pid=718684: Tue Nov 5 12:43:42 2024 00:29:15.520 read: IOPS=180, BW=721KiB/s (739kB/s)(42.3MiB/60022msec) 00:29:15.520 slat (usec): min=4, max=12439, avg=14.20, stdev=119.83 00:29:15.520 clat (usec): min=197, max=41232k, avg=5304.46, stdev=396404.22 00:29:15.520 lat (usec): min=202, max=41233k, avg=5318.66, stdev=396404.26 00:29:15.520 clat percentiles (usec): 00:29:15.520 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:29:15.520 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:29:15.520 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 338], 95.00th=[ 363], 00:29:15.520 | 
99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:15.520 | 99.99th=[42206] 00:29:15.520 write: IOPS=187, BW=751KiB/s (769kB/s)(44.0MiB/60022msec); 0 zone resets 00:29:15.520 slat (nsec): min=5704, max=77280, avg=13575.94, stdev=7987.94 00:29:15.520 clat (usec): min=155, max=532, avg=198.00, stdev=39.09 00:29:15.520 lat (usec): min=162, max=549, avg=211.58, stdev=42.81 00:29:15.520 clat percentiles (usec): 00:29:15.520 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:29:15.520 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 192], 00:29:15.520 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 231], 95.00th=[ 281], 00:29:15.520 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 437], 99.95th=[ 453], 00:29:15.520 | 99.99th=[ 498] 00:29:15.520 bw ( KiB/s): min= 4096, max= 9792, per=100.00%, avg=6931.69, stdev=2126.95, samples=13 00:29:15.520 iops : min= 1024, max= 2448, avg=1732.92, stdev=531.74, samples=13 00:29:15.520 lat (usec) : 250=74.61%, 500=23.90%, 750=0.01% 00:29:15.520 lat (msec) : 4=0.01%, 50=1.47%, >=2000=0.01% 00:29:15.520 cpu : usr=0.26%, sys=0.52%, ctx=22087, majf=0, minf=1 00:29:15.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:15.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:15.520 issued rwts: total=10822,11264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:15.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:15.520 00:29:15.520 Run status group 0 (all jobs): 00:29:15.520 READ: bw=721KiB/s (739kB/s), 721KiB/s-721KiB/s (739kB/s-739kB/s), io=42.3MiB (44.3MB), run=60022-60022msec 00:29:15.520 WRITE: bw=751KiB/s (769kB/s), 751KiB/s-751KiB/s (769kB/s-769kB/s), io=44.0MiB (46.1MB), run=60022-60022msec 00:29:15.520 00:29:15.520 Disk stats (read/write): 00:29:15.520 nvme0n1: ios=10918/11264, merge=0/0, ticks=17182/2120, in_queue=19302, util=99.91% 
00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:15.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:15.520 nvmf hotplug test: fio successful as expected 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:15.520 12:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.520 rmmod nvme_tcp 00:29:15.520 rmmod nvme_fabrics 00:29:15.520 rmmod nvme_keyring 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 718190 ']' 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 718190 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 718190 ']' 00:29:15.520 
12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 718190 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 718190 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:15.520 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 718190' 00:29:15.520 killing process with pid 718190 00:29:15.521 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 718190 00:29:15.521 12:43:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 718190 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.521 12:43:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.162 00:29:16.162 real 1m8.319s 00:29:16.162 user 4m11.022s 00:29:16.162 sys 0m6.669s 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:16.162 ************************************ 00:29:16.162 END TEST nvmf_initiator_timeout 00:29:16.162 ************************************ 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.162 12:43:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:18.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:18.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.063 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:18.064 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:18.064 12:43:47 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:18.064 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:18.064 ************************************ 00:29:18.064 START 
TEST nvmf_perf_adq 00:29:18.064 ************************************ 00:29:18.064 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:18.322 * Looking for test storage... 00:29:18.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.322 12:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:18.322 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.323 --rc genhtml_branch_coverage=1 00:29:18.323 --rc genhtml_function_coverage=1 00:29:18.323 --rc genhtml_legend=1 00:29:18.323 --rc geninfo_all_blocks=1 00:29:18.323 --rc geninfo_unexecuted_blocks=1 00:29:18.323 00:29:18.323 ' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.323 --rc genhtml_branch_coverage=1 00:29:18.323 --rc genhtml_function_coverage=1 00:29:18.323 --rc genhtml_legend=1 00:29:18.323 --rc geninfo_all_blocks=1 00:29:18.323 --rc geninfo_unexecuted_blocks=1 00:29:18.323 00:29:18.323 ' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.323 --rc genhtml_branch_coverage=1 00:29:18.323 --rc genhtml_function_coverage=1 00:29:18.323 --rc genhtml_legend=1 00:29:18.323 --rc geninfo_all_blocks=1 00:29:18.323 --rc geninfo_unexecuted_blocks=1 00:29:18.323 00:29:18.323 ' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.323 --rc genhtml_branch_coverage=1 00:29:18.323 --rc genhtml_function_coverage=1 00:29:18.323 --rc genhtml_legend=1 00:29:18.323 --rc geninfo_all_blocks=1 00:29:18.323 --rc geninfo_unexecuted_blocks=1 00:29:18.323 00:29:18.323 ' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.323 
12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:18.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:18.323 12:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.323 12:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:20.855 12:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:20.855 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:20.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:20.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:20.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:20.855 12:43:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:21.423 12:43:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:23.950 12:43:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.219 12:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.219 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.220 12:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:29.220 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:29:29.220 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:29.220 Found net devices under 0000:0a:00.0: cvl_0_0 
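The `Found net devices under ...` messages above come from the loop in nvmf/common.sh (lines @410-@429 in the trace), which maps each supported PCI function to its kernel network interface by globbing the device's `net/` directory in sysfs and stripping the path prefix. A minimal standalone sketch of that mapping, run against a mock sysfs tree in a temp directory so it needs neither the E810 hardware nor root (the directory layout and `cvl_0_*` names here are taken from the log; the mock tree itself is illustrative, not part of the SPDK scripts):

```shell
#!/usr/bin/env bash
# Mock the sysfs layout /sys/bus/pci/devices/<pci>/net/<ifname>
# that the real script globs on this test node.
sysfs="$(mktemp -d)"
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    # Glob the device's net/ directory; each entry is a full path
    # like .../0000:0a:00.0/net/cvl_0_0
    pci_net_devs=("$sysfs/$pci/net/"*)
    # Strip everything up to the last '/' to get bare interface names
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[*]}"
rm -r "$sysfs"
```

The resulting `net_devs` array is what later populates `TCP_INTERFACE_LIST`, from which the test picks target and initiator interfaces.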
00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:29.220 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:29:29.220 00:29:29.220 --- 10.0.0.2 ping statistics --- 00:29:29.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.220 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:29:29.220 00:29:29.220 --- 10.0.0.1 ping statistics --- 00:29:29.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.220 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.220 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=730334 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 730334 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 730334 ']' 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:29.221 12:43:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 [2024-11-05 12:43:57.880714] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:29:29.221 [2024-11-05 12:43:57.880788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.221 [2024-11-05 12:43:57.955419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.221 [2024-11-05 12:43:58.001176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.221 [2024-11-05 12:43:58.001229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:29.221 [2024-11-05 12:43:58.001257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.221 [2024-11-05 12:43:58.001269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.221 [2024-11-05 12:43:58.001278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.221 [2024-11-05 12:43:58.002766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.221 [2024-11-05 12:43:58.002830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.221 [2024-11-05 12:43:58.002908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.221 [2024-11-05 12:43:58.002913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 [2024-11-05 12:43:58.275074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 
12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 Malloc1 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:29.221 [2024-11-05 12:43:58.337454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=730469 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:29.221 12:43:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:31.118 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:31.118 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.118 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:31.376 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.376 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:31.376 "tick_rate": 2700000000, 00:29:31.376 "poll_groups": [ 00:29:31.376 { 00:29:31.376 "name": "nvmf_tgt_poll_group_000", 00:29:31.376 "admin_qpairs": 1, 00:29:31.376 "io_qpairs": 1, 00:29:31.376 "current_admin_qpairs": 1, 00:29:31.376 "current_io_qpairs": 1, 00:29:31.376 "pending_bdev_io": 0, 00:29:31.376 "completed_nvme_io": 20096, 00:29:31.376 "transports": [ 00:29:31.376 { 00:29:31.376 "trtype": "TCP" 00:29:31.376 } 00:29:31.376 ] 00:29:31.376 }, 00:29:31.376 { 00:29:31.376 "name": "nvmf_tgt_poll_group_001", 00:29:31.376 "admin_qpairs": 0, 00:29:31.376 "io_qpairs": 1, 00:29:31.376 "current_admin_qpairs": 0, 00:29:31.376 "current_io_qpairs": 1, 00:29:31.376 "pending_bdev_io": 0, 00:29:31.376 "completed_nvme_io": 19348, 00:29:31.376 "transports": [ 
00:29:31.376 { 00:29:31.376 "trtype": "TCP" 00:29:31.376 } 00:29:31.376 ] 00:29:31.376 }, 00:29:31.376 { 00:29:31.376 "name": "nvmf_tgt_poll_group_002", 00:29:31.376 "admin_qpairs": 0, 00:29:31.376 "io_qpairs": 1, 00:29:31.376 "current_admin_qpairs": 0, 00:29:31.376 "current_io_qpairs": 1, 00:29:31.376 "pending_bdev_io": 0, 00:29:31.376 "completed_nvme_io": 20583, 00:29:31.376 "transports": [ 00:29:31.376 { 00:29:31.376 "trtype": "TCP" 00:29:31.376 } 00:29:31.376 ] 00:29:31.376 }, 00:29:31.376 { 00:29:31.376 "name": "nvmf_tgt_poll_group_003", 00:29:31.376 "admin_qpairs": 0, 00:29:31.376 "io_qpairs": 1, 00:29:31.376 "current_admin_qpairs": 0, 00:29:31.376 "current_io_qpairs": 1, 00:29:31.376 "pending_bdev_io": 0, 00:29:31.376 "completed_nvme_io": 19705, 00:29:31.376 "transports": [ 00:29:31.376 { 00:29:31.376 "trtype": "TCP" 00:29:31.376 } 00:29:31.376 ] 00:29:31.376 } 00:29:31.376 ] 00:29:31.376 }' 00:29:31.376 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:31.376 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:31.376 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:31.376 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:31.376 12:44:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 730469 00:29:39.481 Initializing NVMe Controllers 00:29:39.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:39.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:39.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:39.481 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:39.481 Initialization complete. Launching workers. 00:29:39.481 ======================================================== 00:29:39.481 Latency(us) 00:29:39.481 Device Information : IOPS MiB/s Average min max 00:29:39.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10462.30 40.87 6118.66 2430.58 10626.81 00:29:39.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10182.80 39.78 6287.54 2500.36 10517.99 00:29:39.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10801.30 42.19 5926.61 2566.74 8962.98 00:29:39.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10648.50 41.60 6009.84 2201.25 10799.82 00:29:39.481 ======================================================== 00:29:39.481 Total : 42094.90 164.43 6082.71 2201.25 10799.82 00:29:39.481 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.481 rmmod nvme_tcp 00:29:39.481 rmmod nvme_fabrics 00:29:39.481 rmmod nvme_keyring 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:39.481 12:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 730334 ']' 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 730334 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 730334 ']' 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 730334 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 730334 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 730334' 00:29:39.481 killing process with pid 730334 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 730334 00:29:39.481 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 730334 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:39.739 12:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.739 12:44:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:41.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:41.642 12:44:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:42.577 12:44:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:45.104 12:44:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.381 12:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.381 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.381 12:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.381 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:29:50.381 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.381 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.381 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.382 12:44:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:29:50.382 00:29:50.382 --- 10.0.0.2 ping statistics --- 00:29:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.382 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:29:50.382 00:29:50.382 --- 10.0.0.1 ping statistics --- 00:29:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.382 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:50.382 net.core.busy_poll = 1 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:50.382 net.core.busy_read = 1 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=733095 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
733095 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 733095 ']' 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.382 [2024-11-05 12:44:19.238651] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:29:50.382 [2024-11-05 12:44:19.238731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.382 [2024-11-05 12:44:19.314570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.382 [2024-11-05 12:44:19.360673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.382 [2024-11-05 12:44:19.360729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.382 [2024-11-05 12:44:19.360758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.382 [2024-11-05 12:44:19.360769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:50.382 [2024-11-05 12:44:19.360778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.382 [2024-11-05 12:44:19.362267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.382 [2024-11-05 12:44:19.362333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.382 [2024-11-05 12:44:19.362399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.382 [2024-11-05 12:44:19.362401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.382 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.639 [2024-11-05 12:44:19.635918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.639 12:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.639 Malloc1 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:50.639 [2024-11-05 12:44:19.701900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=733129 
00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:50.639 12:44:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:52.535 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:52.535 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.535 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:52.535 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.535 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:52.535 "tick_rate": 2700000000, 00:29:52.535 "poll_groups": [ 00:29:52.535 { 00:29:52.535 "name": "nvmf_tgt_poll_group_000", 00:29:52.535 "admin_qpairs": 1, 00:29:52.535 "io_qpairs": 4, 00:29:52.535 "current_admin_qpairs": 1, 00:29:52.535 "current_io_qpairs": 4, 00:29:52.535 "pending_bdev_io": 0, 00:29:52.535 "completed_nvme_io": 33035, 00:29:52.535 "transports": [ 00:29:52.535 { 00:29:52.535 "trtype": "TCP" 00:29:52.535 } 00:29:52.535 ] 00:29:52.535 }, 00:29:52.535 { 00:29:52.535 "name": "nvmf_tgt_poll_group_001", 00:29:52.535 "admin_qpairs": 0, 00:29:52.535 "io_qpairs": 0, 00:29:52.535 "current_admin_qpairs": 0, 00:29:52.535 "current_io_qpairs": 0, 00:29:52.535 "pending_bdev_io": 0, 00:29:52.535 "completed_nvme_io": 0, 00:29:52.535 "transports": [ 00:29:52.535 { 00:29:52.535 "trtype": "TCP" 00:29:52.535 } 00:29:52.535 ] 00:29:52.535 }, 00:29:52.535 { 00:29:52.535 "name": "nvmf_tgt_poll_group_002", 00:29:52.535 "admin_qpairs": 0, 00:29:52.535 "io_qpairs": 0, 00:29:52.535 "current_admin_qpairs": 0, 00:29:52.535 
"current_io_qpairs": 0, 00:29:52.535 "pending_bdev_io": 0, 00:29:52.535 "completed_nvme_io": 0, 00:29:52.535 "transports": [ 00:29:52.535 { 00:29:52.535 "trtype": "TCP" 00:29:52.535 } 00:29:52.535 ] 00:29:52.535 }, 00:29:52.535 { 00:29:52.535 "name": "nvmf_tgt_poll_group_003", 00:29:52.535 "admin_qpairs": 0, 00:29:52.535 "io_qpairs": 0, 00:29:52.535 "current_admin_qpairs": 0, 00:29:52.535 "current_io_qpairs": 0, 00:29:52.535 "pending_bdev_io": 0, 00:29:52.535 "completed_nvme_io": 0, 00:29:52.535 "transports": [ 00:29:52.535 { 00:29:52.536 "trtype": "TCP" 00:29:52.536 } 00:29:52.536 ] 00:29:52.536 } 00:29:52.536 ] 00:29:52.536 }' 00:29:52.536 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:52.536 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:52.536 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:29:52.536 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:29:52.536 12:44:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 733129 00:30:02.508 Initializing NVMe Controllers 00:30:02.508 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:30:02.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:30:02.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:30:02.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:30:02.509 Initialization complete. Launching workers. 
00:30:02.509 ======================================================== 00:30:02.509 Latency(us) 00:30:02.509 Device Information : IOPS MiB/s Average min max 00:30:02.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4411.30 17.23 14558.53 1866.90 62228.16 00:30:02.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4891.30 19.11 13091.57 2048.10 59951.94 00:30:02.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4066.90 15.89 15812.33 1869.47 63057.86 00:30:02.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4348.30 16.99 14767.35 1687.01 60856.57 00:30:02.509 ======================================================== 00:30:02.509 Total : 17717.80 69.21 14492.59 1687.01 63057.86 00:30:02.509 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:02.509 rmmod nvme_tcp 00:30:02.509 rmmod nvme_fabrics 00:30:02.509 rmmod nvme_keyring 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:30:02.509 12:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 733095 ']' 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 733095 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 733095 ']' 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 733095 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:02.509 12:44:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 733095 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 733095' 00:30:02.509 killing process with pid 733095 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 733095 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 733095 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:30:02.509 12:44:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.509 12:44:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:30:03.162 00:30:03.162 real 0m45.030s 00:30:03.162 user 2m40.735s 00:30:03.162 sys 0m8.999s 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:03.162 ************************************ 00:30:03.162 END TEST nvmf_perf_adq 00:30:03.162 ************************************ 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.162 ************************************ 00:30:03.162 START TEST nvmf_shutdown 00:30:03.162 ************************************ 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:03.162 * Looking for test storage... 00:30:03.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:30:03.162 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.421 12:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.421 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:03.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.421 --rc genhtml_branch_coverage=1 00:30:03.422 --rc genhtml_function_coverage=1 00:30:03.422 --rc genhtml_legend=1 00:30:03.422 --rc geninfo_all_blocks=1 00:30:03.422 --rc geninfo_unexecuted_blocks=1 00:30:03.422 00:30:03.422 ' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:03.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.422 --rc genhtml_branch_coverage=1 00:30:03.422 --rc genhtml_function_coverage=1 00:30:03.422 --rc genhtml_legend=1 00:30:03.422 --rc geninfo_all_blocks=1 00:30:03.422 --rc geninfo_unexecuted_blocks=1 00:30:03.422 00:30:03.422 ' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:03.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.422 --rc genhtml_branch_coverage=1 00:30:03.422 --rc genhtml_function_coverage=1 00:30:03.422 --rc genhtml_legend=1 00:30:03.422 --rc geninfo_all_blocks=1 00:30:03.422 --rc geninfo_unexecuted_blocks=1 00:30:03.422 00:30:03.422 ' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:03.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.422 --rc genhtml_branch_coverage=1 00:30:03.422 --rc genhtml_function_coverage=1 00:30:03.422 --rc genhtml_legend=1 00:30:03.422 --rc geninfo_all_blocks=1 00:30:03.422 --rc geninfo_unexecuted_blocks=1 00:30:03.422 00:30:03.422 ' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:03.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:03.422 ************************************ 00:30:03.422 START TEST nvmf_shutdown_tc1 00:30:03.422 ************************************ 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:03.422 12:44:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:05.954 12:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.954 12:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:05.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.954 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.955 12:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:05.955 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:05.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:05.955 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.955 12:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:30:05.955 00:30:05.955 --- 10.0.0.2 ping statistics --- 00:30:05.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.955 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:30:05.955 00:30:05.955 --- 10.0.0.1 ping statistics --- 00:30:05.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.955 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=736298 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 736298 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 736298 ']' 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:05.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.955 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:05.956 12:44:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.956 [2024-11-05 12:44:34.867957] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:05.956 [2024-11-05 12:44:34.868032] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.956 [2024-11-05 12:44:34.943833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.956 [2024-11-05 12:44:34.991920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.956 [2024-11-05 12:44:34.991990] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.956 [2024-11-05 12:44:34.992024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.956 [2024-11-05 12:44:34.992036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.956 [2024-11-05 12:44:34.992045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
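The `nvmf_tcp_init` sequence recorded above (the `ip netns` / `ip link` / `ip addr` / `iptables` / `ping` steps) can be read as a standalone script. A minimal sketch, with interface names, addresses, and the namespace name taken directly from the log; this requires root and real NICs, so it is illustrative only:

```shell
# Sketch of the target/initiator wiring captured in the log (nvmf_tcp_init).
# Interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.x
# addresses are taken from the log output above; requires root.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0              # clear any stale addressing
ip -4 addr flush cvl_0_1
ip netns add "$NS"                    # target side lives in its own netns
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port toward the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                    # verify reachability both ways
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target interface in a namespace lets initiator and target traffic traverse a real NIC-to-NIC path on one host, which is why the log later prefixes the target app with `ip netns exec cvl_0_0_ns_spdk`.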
00:30:05.956 [2024-11-05 12:44:34.993661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.956 [2024-11-05 12:44:34.993725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.956 [2024-11-05 12:44:34.993823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.956 [2024-11-05 12:44:34.993826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.956 [2024-11-05 12:44:35.134663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.956 12:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.956 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:06.214 Malloc1 00:30:06.214 [2024-11-05 12:44:35.232041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.214 Malloc2 00:30:06.214 Malloc3 00:30:06.214 Malloc4 00:30:06.214 Malloc5 00:30:06.471 Malloc6 00:30:06.471 Malloc7 00:30:06.471 Malloc8 00:30:06.471 Malloc9 
00:30:06.471 Malloc10 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=736471 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 736471 /var/tmp/bdevperf.sock 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 736471 ']' 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:30:06.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:06.471 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.471 { 00:30:06.471 "params": { 00:30:06.471 "name": "Nvme$subsystem", 00:30:06.471 "trtype": "$TEST_TRANSPORT", 00:30:06.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.471 "adrfam": "ipv4", 00:30:06.471 "trsvcid": "$NVMF_PORT", 00:30:06.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.472 "hdgst": ${hdgst:-false}, 00:30:06.472 "ddgst": ${ddgst:-false} 00:30:06.472 }, 00:30:06.472 "method": "bdev_nvme_attach_controller" 00:30:06.472 } 00:30:06.472 EOF 00:30:06.472 )") 00:30:06.472 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.729 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.729 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.729 { 00:30:06.729 "params": { 00:30:06.729 "name": "Nvme$subsystem", 00:30:06.729 "trtype": "$TEST_TRANSPORT", 00:30:06.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.729 "adrfam": "ipv4", 00:30:06.729 "trsvcid": "$NVMF_PORT", 00:30:06.729 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.729 "hdgst": ${hdgst:-false}, 00:30:06.729 "ddgst": ${ddgst:-false} 00:30:06.729 }, 00:30:06.729 "method": "bdev_nvme_attach_controller" 00:30:06.729 } 00:30:06.729 EOF 00:30:06.729 )") 00:30:06.729 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.729 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.729 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.729 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": ${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.730 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": 
${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.730 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": ${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.730 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": ${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 
00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.730 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": ${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.730 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": ${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.730 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": ${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.730 { 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme$subsystem", 00:30:06.730 "trtype": "$TEST_TRANSPORT", 00:30:06.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "$NVMF_PORT", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.730 "hdgst": ${hdgst:-false}, 00:30:06.730 "ddgst": ${ddgst:-false} 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 } 00:30:06.730 EOF 00:30:06.730 )") 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:06.730 12:44:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme1", 00:30:06.730 "trtype": "tcp", 00:30:06.730 "traddr": "10.0.0.2", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "4420", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.730 "hdgst": false, 00:30:06.730 "ddgst": false 00:30:06.730 }, 00:30:06.730 "method": "bdev_nvme_attach_controller" 00:30:06.730 },{ 00:30:06.730 "params": { 00:30:06.730 "name": "Nvme2", 00:30:06.730 "trtype": "tcp", 00:30:06.730 "traddr": "10.0.0.2", 00:30:06.730 "adrfam": "ipv4", 00:30:06.730 "trsvcid": "4420", 00:30:06.730 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 00:30:06.731 "params": { 00:30:06.731 "name": "Nvme3", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 00:30:06.731 "params": { 00:30:06.731 "name": "Nvme4", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 
00:30:06.731 "params": { 00:30:06.731 "name": "Nvme5", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 00:30:06.731 "params": { 00:30:06.731 "name": "Nvme6", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 00:30:06.731 "params": { 00:30:06.731 "name": "Nvme7", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 00:30:06.731 "params": { 00:30:06.731 "name": "Nvme8", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 00:30:06.731 "params": { 00:30:06.731 "name": "Nvme9", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:06.731 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 },{ 00:30:06.731 "params": { 00:30:06.731 "name": "Nvme10", 00:30:06.731 "trtype": "tcp", 00:30:06.731 "traddr": "10.0.0.2", 00:30:06.731 "adrfam": "ipv4", 00:30:06.731 "trsvcid": "4420", 00:30:06.731 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:06.731 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:06.731 "hdgst": false, 00:30:06.731 "ddgst": false 00:30:06.731 }, 00:30:06.731 "method": "bdev_nvme_attach_controller" 00:30:06.731 }' 00:30:06.731 [2024-11-05 12:44:35.756452] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:06.731 [2024-11-05 12:44:35.756523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:06.731 [2024-11-05 12:44:35.833268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.731 [2024-11-05 12:44:35.880138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 736471 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:08.626 12:44:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:09.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 736471 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 736298 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.997 { 00:30:09.997 "params": { 00:30:09.997 "name": "Nvme$subsystem", 00:30:09.997 "trtype": "$TEST_TRANSPORT", 00:30:09.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.997 "adrfam": "ipv4", 00:30:09.997 "trsvcid": "$NVMF_PORT", 00:30:09.997 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.997 "hdgst": ${hdgst:-false}, 00:30:09.997 "ddgst": ${ddgst:-false} 00:30:09.997 }, 00:30:09.997 "method": "bdev_nvme_attach_controller" 00:30:09.997 } 00:30:09.997 EOF 00:30:09.997 )") 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.997 { 00:30:09.997 "params": { 00:30:09.997 "name": "Nvme$subsystem", 00:30:09.997 "trtype": "$TEST_TRANSPORT", 00:30:09.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.997 "adrfam": "ipv4", 00:30:09.997 "trsvcid": "$NVMF_PORT", 00:30:09.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.997 "hdgst": ${hdgst:-false}, 00:30:09.997 "ddgst": ${ddgst:-false} 00:30:09.997 }, 00:30:09.997 "method": "bdev_nvme_attach_controller" 00:30:09.997 } 00:30:09.997 EOF 00:30:09.997 )") 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.997 { 00:30:09.997 "params": { 00:30:09.997 "name": "Nvme$subsystem", 00:30:09.997 "trtype": "$TEST_TRANSPORT", 00:30:09.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.997 "adrfam": "ipv4", 00:30:09.997 "trsvcid": "$NVMF_PORT", 00:30:09.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.997 "hdgst": 
${hdgst:-false}, 00:30:09.997 "ddgst": ${ddgst:-false} 00:30:09.997 }, 00:30:09.997 "method": "bdev_nvme_attach_controller" 00:30:09.997 } 00:30:09.997 EOF 00:30:09.997 )") 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.997 { 00:30:09.997 "params": { 00:30:09.997 "name": "Nvme$subsystem", 00:30:09.997 "trtype": "$TEST_TRANSPORT", 00:30:09.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.997 "adrfam": "ipv4", 00:30:09.997 "trsvcid": "$NVMF_PORT", 00:30:09.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.997 "hdgst": ${hdgst:-false}, 00:30:09.997 "ddgst": ${ddgst:-false} 00:30:09.997 }, 00:30:09.997 "method": "bdev_nvme_attach_controller" 00:30:09.997 } 00:30:09.997 EOF 00:30:09.997 )") 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.997 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.997 { 00:30:09.997 "params": { 00:30:09.997 "name": "Nvme$subsystem", 00:30:09.997 "trtype": "$TEST_TRANSPORT", 00:30:09.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.997 "adrfam": "ipv4", 00:30:09.997 "trsvcid": "$NVMF_PORT", 00:30:09.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.997 "hdgst": ${hdgst:-false}, 00:30:09.997 "ddgst": ${ddgst:-false} 00:30:09.997 }, 00:30:09.997 "method": "bdev_nvme_attach_controller" 
00:30:09.997 } 00:30:09.997 EOF 00:30:09.998 )") 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.998 { 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme$subsystem", 00:30:09.998 "trtype": "$TEST_TRANSPORT", 00:30:09.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "$NVMF_PORT", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.998 "hdgst": ${hdgst:-false}, 00:30:09.998 "ddgst": ${ddgst:-false} 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 } 00:30:09.998 EOF 00:30:09.998 )") 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.998 { 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme$subsystem", 00:30:09.998 "trtype": "$TEST_TRANSPORT", 00:30:09.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "$NVMF_PORT", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.998 "hdgst": ${hdgst:-false}, 00:30:09.998 "ddgst": ${ddgst:-false} 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 } 00:30:09.998 EOF 00:30:09.998 )") 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.998 { 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme$subsystem", 00:30:09.998 "trtype": "$TEST_TRANSPORT", 00:30:09.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "$NVMF_PORT", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.998 "hdgst": ${hdgst:-false}, 00:30:09.998 "ddgst": ${ddgst:-false} 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 } 00:30:09.998 EOF 00:30:09.998 )") 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.998 { 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme$subsystem", 00:30:09.998 "trtype": "$TEST_TRANSPORT", 00:30:09.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "$NVMF_PORT", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.998 "hdgst": ${hdgst:-false}, 00:30:09.998 "ddgst": ${ddgst:-false} 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 } 00:30:09.998 EOF 00:30:09.998 )") 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.998 { 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme$subsystem", 00:30:09.998 "trtype": "$TEST_TRANSPORT", 00:30:09.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "$NVMF_PORT", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.998 "hdgst": ${hdgst:-false}, 00:30:09.998 "ddgst": ${ddgst:-false} 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 } 00:30:09.998 EOF 00:30:09.998 )") 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:09.998 12:44:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme1", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme2", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 
00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme3", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme4", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme5", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme6", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme7", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme8", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme9", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.998 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:09.998 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:09.998 "hdgst": false, 00:30:09.998 "ddgst": false 00:30:09.998 }, 00:30:09.998 "method": "bdev_nvme_attach_controller" 00:30:09.998 },{ 00:30:09.998 "params": { 00:30:09.998 "name": "Nvme10", 00:30:09.998 "trtype": "tcp", 00:30:09.998 "traddr": "10.0.0.2", 00:30:09.998 "adrfam": "ipv4", 00:30:09.998 "trsvcid": "4420", 00:30:09.999 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:09.999 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:09.999 "hdgst": false, 00:30:09.999 "ddgst": false 00:30:09.999 }, 00:30:09.999 "method": "bdev_nvme_attach_controller" 00:30:09.999 }' 00:30:09.999 [2024-11-05 12:44:38.860768] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:30:09.999 [2024-11-05 12:44:38.860842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736892 ] 00:30:09.999 [2024-11-05 12:44:38.934970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.999 [2024-11-05 12:44:38.983824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.893 Running I/O for 1 seconds... 00:30:12.716 1808.00 IOPS, 113.00 MiB/s 00:30:12.716 Latency(us) 00:30:12.716 [2024-11-05T11:44:41.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.716 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme1n1 : 1.15 222.42 13.90 0.00 0.00 285115.92 22816.24 259425.47 00:30:12.716 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme2n1 : 1.17 219.02 13.69 0.00 0.00 285145.69 23301.69 262532.36 00:30:12.716 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme3n1 : 1.14 228.30 14.27 0.00 0.00 264176.55 20777.34 253211.69 00:30:12.716 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme4n1 : 1.17 273.13 17.07 0.00 0.00 220350.99 6650.69 265639.25 00:30:12.716 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme5n1 : 1.16 220.00 13.75 0.00 0.00 270594.09 22816.24 259425.47 00:30:12.716 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 
length 0x400 00:30:12.716 Nvme6n1 : 1.16 221.04 13.82 0.00 0.00 264760.89 21942.42 254765.13 00:30:12.716 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme7n1 : 1.11 229.97 14.37 0.00 0.00 249044.01 18447.17 268746.15 00:30:12.716 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme8n1 : 1.18 270.40 16.90 0.00 0.00 208835.77 12427.57 243891.01 00:30:12.716 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme9n1 : 1.18 217.78 13.61 0.00 0.00 255815.68 20874.43 268746.15 00:30:12.716 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:12.716 Verification LBA range: start 0x0 length 0x400 00:30:12.716 Nvme10n1 : 1.18 216.99 13.56 0.00 0.00 252638.25 19223.89 285834.05 00:30:12.716 [2024-11-05T11:44:41.954Z] =================================================================================================================== 00:30:12.716 [2024-11-05T11:44:41.954Z] Total : 2319.04 144.94 0.00 0.00 253708.39 6650.69 285834.05 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:12.973 12:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:12.973 rmmod nvme_tcp 00:30:12.973 rmmod nvme_fabrics 00:30:12.973 rmmod nvme_keyring 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:12.973 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 736298 ']' 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 736298 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 736298 ']' 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 736298 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:30:12.974 12:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 736298 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 736298' 00:30:12.974 killing process with pid 736298 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 736298 00:30:12.974 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 736298 00:30:13.540 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.540 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.540 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.541 12:44:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.448 00:30:15.448 real 0m12.077s 00:30:15.448 user 0m35.219s 00:30:15.448 sys 0m3.335s 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:15.448 ************************************ 00:30:15.448 END TEST nvmf_shutdown_tc1 00:30:15.448 ************************************ 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:15.448 ************************************ 00:30:15.448 START TEST nvmf_shutdown_tc2 00:30:15.448 ************************************ 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # 
nvmf_shutdown_tc2 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.448 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.449 12:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:15.449 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:15.449 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.449 12:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:15.449 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:15.449 12:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:15.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.449 12:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.449 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
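The network plumbing above (nvmf/common.sh lines @271 through @284) follows a standard Linux pattern: create a fresh network namespace, move the target-side interface into it, address both ends, and bring the links (including the namespace's loopback) up. A hedged dry-run sketch of that sequence, not the SPDK script itself; interface and namespace names mirror the log, and the `run` prefix defaults to `echo` so the plan can be inspected without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup the log performs.
# Pass a 4th argument of "" (or "sudo") to actually execute; by default
# each command is echoed instead of run.
setup_target_ns() {
    local ns=$1 target_if=$2 initiator_if=$3 run=${4:-echo}
    $run ip netns add "$ns"                                    # isolated stack for the target
    $run ip link set "$target_if" netns "$ns"                  # move target NIC into it
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"           # initiator-side address
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up                 # loopback inside the namespace
}

setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The two `ping -c 1` probes that follow in the log are the sanity check that traffic crosses the namespace boundary in both directions before the NVMe-oF target is started.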
00:30:15.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:30:15.708 00:30:15.708 --- 10.0.0.2 ping statistics --- 00:30:15.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.708 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:30:15.708 00:30:15.708 --- 10.0.0.1 ping statistics --- 00:30:15.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.708 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.708 12:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=737659 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 737659 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 737659 ']' 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
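The `waitforlisten` helper invoked here boils down to polling until the freshly launched app is alive and its UNIX-domain RPC socket exists. A simplified, hedged sketch of that pattern (the function name, retry count, and sleep interval are illustrative, not the actual SPDK implementation):

```shell
# Sketch of a waitforlisten-style poll loop: succeed once the process is
# running AND its UNIX-domain RPC socket has appeared; fail if the process
# dies first or the retry budget is exhausted.
wait_for_socket() {
    local pid=$1 sock=$2 max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Fail fast if the process died before it ever listened
        kill -0 "$pid" 2>/dev/null || return 1
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1  # timed out
}
```

Checking the pid on every iteration is what lets the harness report a crash quickly instead of burning the full timeout waiting for a socket that will never appear.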
00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:15.708 12:44:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.708 [2024-11-05 12:44:44.875097] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:15.708 [2024-11-05 12:44:44.875182] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.708 [2024-11-05 12:44:44.947976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.966 [2024-11-05 12:44:44.992388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.966 [2024-11-05 12:44:44.992449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.966 [2024-11-05 12:44:44.992477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.966 [2024-11-05 12:44:44.992488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.966 [2024-11-05 12:44:44.992497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:15.966 [2024-11-05 12:44:44.993928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.966 [2024-11-05 12:44:44.994006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.966 [2024-11-05 12:44:44.994065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:15.966 [2024-11-05 12:44:44.994068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.966 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.967 [2024-11-05 12:44:45.136850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.967 12:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.967 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.225 Malloc1 00:30:16.225 [2024-11-05 12:44:45.230288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.225 Malloc2 00:30:16.225 Malloc3 00:30:16.225 Malloc4 00:30:16.225 Malloc5 00:30:16.225 Malloc6 00:30:16.482 Malloc7 00:30:16.482 Malloc8 00:30:16.482 Malloc9 
00:30:16.482 Malloc10 00:30:16.482 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.482 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:16.482 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:16.482 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=737836 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 737836 /var/tmp/bdevperf.sock 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 737836 ']' 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:30:16.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.483 { 00:30:16.483 "params": { 00:30:16.483 "name": "Nvme$subsystem", 00:30:16.483 "trtype": "$TEST_TRANSPORT", 00:30:16.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.483 "adrfam": "ipv4", 00:30:16.483 "trsvcid": "$NVMF_PORT", 00:30:16.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.483 "hdgst": ${hdgst:-false}, 00:30:16.483 "ddgst": ${ddgst:-false} 00:30:16.483 }, 00:30:16.483 "method": "bdev_nvme_attach_controller" 00:30:16.483 } 00:30:16.483 EOF 00:30:16.483 )") 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.483 { 00:30:16.483 "params": { 00:30:16.483 "name": "Nvme$subsystem", 00:30:16.483 "trtype": "$TEST_TRANSPORT", 00:30:16.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.483 "adrfam": "ipv4", 00:30:16.483 "trsvcid": "$NVMF_PORT", 00:30:16.483 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.483 "hdgst": ${hdgst:-false}, 00:30:16.483 "ddgst": ${ddgst:-false} 00:30:16.483 }, 00:30:16.483 "method": "bdev_nvme_attach_controller" 00:30:16.483 } 00:30:16.483 EOF 00:30:16.483 )") 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.483 { 00:30:16.483 "params": { 00:30:16.483 "name": "Nvme$subsystem", 00:30:16.483 "trtype": "$TEST_TRANSPORT", 00:30:16.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.483 "adrfam": "ipv4", 00:30:16.483 "trsvcid": "$NVMF_PORT", 00:30:16.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.483 "hdgst": ${hdgst:-false}, 00:30:16.483 "ddgst": ${ddgst:-false} 00:30:16.483 }, 00:30:16.483 "method": "bdev_nvme_attach_controller" 00:30:16.483 } 00:30:16.483 EOF 00:30:16.483 )") 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.483 { 00:30:16.483 "params": { 00:30:16.483 "name": "Nvme$subsystem", 00:30:16.483 "trtype": "$TEST_TRANSPORT", 00:30:16.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.483 "adrfam": "ipv4", 00:30:16.483 "trsvcid": "$NVMF_PORT", 00:30:16.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.483 "hdgst": 
${hdgst:-false}, 00:30:16.483 "ddgst": ${ddgst:-false} 00:30:16.483 }, 00:30:16.483 "method": "bdev_nvme_attach_controller" 00:30:16.483 } 00:30:16.483 EOF 00:30:16.483 )") 00:30:16.483 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.741 { 00:30:16.741 "params": { 00:30:16.741 "name": "Nvme$subsystem", 00:30:16.741 "trtype": "$TEST_TRANSPORT", 00:30:16.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.741 "adrfam": "ipv4", 00:30:16.741 "trsvcid": "$NVMF_PORT", 00:30:16.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.741 "hdgst": ${hdgst:-false}, 00:30:16.741 "ddgst": ${ddgst:-false} 00:30:16.741 }, 00:30:16.741 "method": "bdev_nvme_attach_controller" 00:30:16.741 } 00:30:16.741 EOF 00:30:16.741 )") 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.741 { 00:30:16.741 "params": { 00:30:16.741 "name": "Nvme$subsystem", 00:30:16.741 "trtype": "$TEST_TRANSPORT", 00:30:16.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.741 "adrfam": "ipv4", 00:30:16.741 "trsvcid": "$NVMF_PORT", 00:30:16.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.741 "hdgst": ${hdgst:-false}, 00:30:16.741 "ddgst": ${ddgst:-false} 00:30:16.741 }, 00:30:16.741 "method": "bdev_nvme_attach_controller" 
00:30:16.741 } 00:30:16.741 EOF 00:30:16.741 )") 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.741 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.741 { 00:30:16.741 "params": { 00:30:16.741 "name": "Nvme$subsystem", 00:30:16.741 "trtype": "$TEST_TRANSPORT", 00:30:16.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.741 "adrfam": "ipv4", 00:30:16.741 "trsvcid": "$NVMF_PORT", 00:30:16.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.742 "hdgst": ${hdgst:-false}, 00:30:16.742 "ddgst": ${ddgst:-false} 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 } 00:30:16.742 EOF 00:30:16.742 )") 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.742 { 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme$subsystem", 00:30:16.742 "trtype": "$TEST_TRANSPORT", 00:30:16.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "$NVMF_PORT", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.742 "hdgst": ${hdgst:-false}, 00:30:16.742 "ddgst": ${ddgst:-false} 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 } 00:30:16.742 EOF 00:30:16.742 )") 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.742 { 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme$subsystem", 00:30:16.742 "trtype": "$TEST_TRANSPORT", 00:30:16.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "$NVMF_PORT", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.742 "hdgst": ${hdgst:-false}, 00:30:16.742 "ddgst": ${ddgst:-false} 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 } 00:30:16.742 EOF 00:30:16.742 )") 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:16.742 { 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme$subsystem", 00:30:16.742 "trtype": "$TEST_TRANSPORT", 00:30:16.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "$NVMF_PORT", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:16.742 "hdgst": ${hdgst:-false}, 00:30:16.742 "ddgst": ${ddgst:-false} 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 } 00:30:16.742 EOF 00:30:16.742 )") 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:30:16.742 12:44:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme1", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme2", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme3", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme4", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 
00:30:16.742 "params": { 00:30:16.742 "name": "Nvme5", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme6", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme7", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme8", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme9", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:16.742 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 },{ 00:30:16.742 "params": { 00:30:16.742 "name": "Nvme10", 00:30:16.742 "trtype": "tcp", 00:30:16.742 "traddr": "10.0.0.2", 00:30:16.742 "adrfam": "ipv4", 00:30:16.742 "trsvcid": "4420", 00:30:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:16.742 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:16.742 "hdgst": false, 00:30:16.742 "ddgst": false 00:30:16.742 }, 00:30:16.742 "method": "bdev_nvme_attach_controller" 00:30:16.742 }' 00:30:16.742 [2024-11-05 12:44:45.758506] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:16.743 [2024-11-05 12:44:45.758583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid737836 ] 00:30:16.743 [2024-11-05 12:44:45.831134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.743 [2024-11-05 12:44:45.877924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.115 Running I/O for 10 seconds... 
00:30:18.681 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:18.681 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:18.681 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:18.681 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.681 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.681 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.681 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 737836 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 737836 ']' 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 737836 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 737836 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- 
# process_name=reactor_0 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 737836' 00:30:18.682 killing process with pid 737836 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 737836 00:30:18.682 12:44:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 737836 00:30:18.940 Received shutdown signal, test time was about 0.826473 seconds 00:30:18.940 00:30:18.940 Latency(us) 00:30:18.940 [2024-11-05T11:44:48.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme1n1 : 0.80 239.36 14.96 0.00 0.00 263076.98 19126.80 231463.44 00:30:18.940 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme2n1 : 0.81 237.69 14.86 0.00 0.00 259543.29 24855.13 267192.70 00:30:18.940 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme3n1 : 0.79 247.62 15.48 0.00 0.00 241240.34 3301.07 242337.56 00:30:18.940 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme4n1 : 0.78 245.30 15.33 0.00 0.00 238747.24 18058.81 251658.24 00:30:18.940 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme5n1 : 0.82 234.52 14.66 0.00 0.00 244930.18 22136.60 256318.58 
00:30:18.940 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme6n1 : 0.82 233.68 14.61 0.00 0.00 239012.98 19612.25 259425.47 00:30:18.940 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme7n1 : 0.81 235.87 14.74 0.00 0.00 231355.23 20680.25 257872.02 00:30:18.940 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme8n1 : 0.80 240.54 15.03 0.00 0.00 220164.80 21651.15 251658.24 00:30:18.940 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme9n1 : 0.83 232.54 14.53 0.00 0.00 223472.70 20583.16 270299.59 00:30:18.940 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.940 Verification LBA range: start 0x0 length 0x400 00:30:18.940 Nvme10n1 : 0.78 165.00 10.31 0.00 0.00 301541.83 22524.97 292047.83 00:30:18.940 [2024-11-05T11:44:48.178Z] =================================================================================================================== 00:30:18.940 [2024-11-05T11:44:48.178Z] Total : 2312.11 144.51 0.00 0.00 244397.16 3301.07 292047.83 00:30:18.940 12:44:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 737659 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.311 rmmod nvme_tcp 00:30:20.311 rmmod nvme_fabrics 00:30:20.311 rmmod nvme_keyring 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 737659 ']' 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 737659 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # 
'[' -z 737659 ']' 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 737659 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 737659 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 737659' 00:30:20.311 killing process with pid 737659 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 737659 00:30:20.311 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 737659 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:30:20.576 12:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.576 12:44:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.118 00:30:23.118 real 0m7.155s 00:30:23.118 user 0m21.019s 00:30:23.118 sys 0m1.413s 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:23.118 ************************************ 00:30:23.118 END TEST nvmf_shutdown_tc2 00:30:23.118 ************************************ 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:23.118 12:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:23.118 ************************************ 00:30:23.118 START TEST nvmf_shutdown_tc3 00:30:23.118 ************************************ 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.118 12:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:23.118 12:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.118 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:23.119 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:23.119 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:23.119 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:23.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.119 12:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:23.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:30:23.119 00:30:23.119 --- 10.0.0.2 ping statistics --- 00:30:23.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.119 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:30:23.119 00:30:23.119 --- 10.0.0.1 ping statistics --- 00:30:23.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.119 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.119 12:44:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.120 12:44:52 
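The `nvmf_tcp_init` steps traced above (common.sh@250 through @291) move one port of the NIC into a private network namespace for the target, address both ends, open the NVMe/TCP port, and verify reachability with a ping in each direction. The sequence can be sketched as below; this is a hedged reconstruction from the trace, and `run` is a hypothetical dry-run wrapper that only prints each command so the sketch is safe to execute without root privileges.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence in the trace above.
# "run" merely echoes the command it is given instead of executing it.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # port handed to the SPDK target
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NS=cvl_0_0_ns_spdk       # namespace isolating the target side

# Flush any stale IPv4 addresses, then isolate the target interface.
run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the link: initiator 10.0.0.1, target 10.0.0.2.
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

# Bring the interfaces (and the namespace loopback) up.
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP discovery port, then verify both directions.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Dropping the `run` prefix (and running as root) would perform the real setup; the harness additionally tags its iptables rule with an `-m comment` so it can be cleaned up later.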
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=738637 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 738637 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 738637 ']' 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.120 [2024-11-05 12:44:52.078092] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:23.120 [2024-11-05 12:44:52.078194] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.120 [2024-11-05 12:44:52.157123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:23.120 [2024-11-05 12:44:52.205363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.120 [2024-11-05 12:44:52.205437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.120 [2024-11-05 12:44:52.205465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.120 [2024-11-05 12:44:52.205484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.120 [2024-11-05 12:44:52.205494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:23.120 [2024-11-05 12:44:52.207023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.120 [2024-11-05 12:44:52.207097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.120 [2024-11-05 12:44:52.207169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:23.120 [2024-11-05 12:44:52.207186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.120 [2024-11-05 12:44:52.340125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.120 12:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.120 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.378 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.378 Malloc1 00:30:23.378 [2024-11-05 12:44:52.428560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.378 Malloc2 00:30:23.378 Malloc3 00:30:23.378 Malloc4 00:30:23.378 Malloc5 00:30:23.636 Malloc6 00:30:23.636 Malloc7 00:30:23.636 Malloc8 00:30:23.636 Malloc9 
00:30:23.636 Malloc10 00:30:23.894 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.894 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:23.894 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:23.894 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.894 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=738811 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 738811 /var/tmp/bdevperf.sock 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 738811 ']' 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:30:23.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": ${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 )") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 
"adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": ${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 )") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": ${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 )") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": ${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 )") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": ${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 )") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": 
${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 )") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": ${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 )") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.895 "ddgst": ${ddgst:-false} 00:30:23.895 }, 00:30:23.895 "method": "bdev_nvme_attach_controller" 00:30:23.895 } 00:30:23.895 EOF 00:30:23.895 
)") 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.895 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.895 { 00:30:23.895 "params": { 00:30:23.895 "name": "Nvme$subsystem", 00:30:23.895 "trtype": "$TEST_TRANSPORT", 00:30:23.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.895 "adrfam": "ipv4", 00:30:23.895 "trsvcid": "$NVMF_PORT", 00:30:23.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.895 "hdgst": ${hdgst:-false}, 00:30:23.896 "ddgst": ${ddgst:-false} 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 } 00:30:23.896 EOF 00:30:23.896 )") 00:30:23.896 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.896 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:23.896 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:23.896 { 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme$subsystem", 00:30:23.896 "trtype": "$TEST_TRANSPORT", 00:30:23.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "$NVMF_PORT", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.896 "hdgst": ${hdgst:-false}, 00:30:23.896 "ddgst": ${ddgst:-false} 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 } 00:30:23.896 EOF 00:30:23.896 )") 00:30:23.896 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:23.896 
12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:30:23.896 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:30:23.896 12:44:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme1", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme2", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme3", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme4", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 
00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme5", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme6", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme7", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme8", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme9", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 },{ 00:30:23.896 "params": { 00:30:23.896 "name": "Nvme10", 00:30:23.896 "trtype": "tcp", 00:30:23.896 "traddr": "10.0.0.2", 00:30:23.896 "adrfam": "ipv4", 00:30:23.896 "trsvcid": "4420", 00:30:23.896 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:23.896 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:23.896 "hdgst": false, 00:30:23.896 "ddgst": false 00:30:23.896 }, 00:30:23.896 "method": "bdev_nvme_attach_controller" 00:30:23.896 }' 00:30:23.896 [2024-11-05 12:44:52.955074] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:23.896 [2024-11-05 12:44:52.955153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738811 ] 00:30:23.896 [2024-11-05 12:44:53.027609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.896 [2024-11-05 12:44:53.075699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.793 Running I/O for 10 seconds... 
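The `gen_nvmf_target_json` loop traced above (common.sh@560 through @586) emits one `bdev_nvme_attach_controller` parameter object per subsystem id and joins them with commas into the `--json` payload handed to bdevperf. A minimal stand-alone sketch of that assembly, assuming fixed stand-in values for `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` (the harness substitutes these at expansion time), could look like this; `gen_config` is a hypothetical name, not the harness function:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly seen in the trace:
# one attach_controller entry per id, comma-joined into a JSON array.
gen_config() {
  local id parts=()
  for id in "$@"; do
    parts+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$id" "$id" "$id")")
  done
  # Join the fragments with commas, mirroring the IFS=, / printf step.
  local IFS=,
  printf '[%s]\n' "${parts[*]}"
}

gen_config 1 2 3
```

The real helper pipes each fragment through `jq .` for validation before joining, which is why the trace shows pretty-printed objects separated by `},{`.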
00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:25.793 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:26.051 12:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:26.051 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 738637 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 738637 ']' 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 738637 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 738637 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 738637' 00:30:26.326 killing process with pid 738637 00:30:26.326 12:44:55 
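The xtrace above shows shutdown.sh's `waitforio` loop polling `bdev_get_iostat` until `num_read_ops` reaches 100 or ten attempts are exhausted (67 on the first poll, 131 on the second). A self-contained sketch of that loop, where `mock_iostat`, `samples`, and `poll` are hypothetical stand-ins for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat` call, and `jq` is assumed to be installed:

```shell
#!/usr/bin/env bash
# Mock iostat source: returns canned read counts matching the trace
# (67 on the first poll, 131 on the second).
samples=(67 131 200)
poll=0
mock_iostat() {
    printf '{"bdevs": [{"name": "Nvme1n1", "num_read_ops": %s}]}\n' \
        "${samples[$poll]}"
}

# Poll num_read_ops until it reaches 100 or 10 attempts run out,
# sleeping 0.25 s between polls, like target/shutdown.sh's waitforio.
waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(mock_iostat | jq -r '.bdevs[0].num_read_ops')
        poll=$((poll + 1))   # advance the mock in the parent shell
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

Calling `waitforio` here succeeds on the second poll, matching the `read_io_count=67` then `read_io_count=131` progression in the trace.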
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 738637 00:30:26.326 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 738637 00:30:26.326 [2024-11-05 12:44:55.416499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3070 is same with the state(6) to be set 00:30:26.326 [same message repeated for tqpair=0x16b3070 through 12:44:55.417378] 00:30:26.327 [2024-11-05 12:44:55.418648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18262b0 is same with the state(6) to be set 00:30:26.327 [same message repeated for tqpair=0x18262b0 through 12:44:55.420039] 00:30:26.328 [2024-11-05 12:44:55.421448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3540 is same with the state(6) to be set 00:30:26.328 [same message repeated for tqpair=0x16b3540 through 12:44:55.422649] 00:30:26.328 [2024-11-05 12:44:55.424199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.328 [same message repeated for tqpair=0x16b3a10 through 12:44:55.424331] 00:30:26.329 [2024-11-05 12:44:55.424343]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424486] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424635] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424783] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3a10 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.424948] 
00:30:26.329 [2024-11-05 12:44:55.426980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 
[2024-11-05 12:44:55.427092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0fb00 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.427212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x15b31c0 is same with the state(6) to be set 00:30:26.329 [2024-11-05 12:44:55.427403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.329 [2024-11-05 12:44:55.427479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.329 [2024-11-05 12:44:55.427499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.330 [2024-11-05 12:44:55.427513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-11-05 12:44:55.427525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3620 is same with the state(6) to be set 00:30:26.330 [2024-11-05 12:44:55.427573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.330 [2024-11-05 12:44:55.427593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-11-05 
12:44:55.427608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.330 [2024-11-05 12:44:55.427621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-11-05 12:44:55.427634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.330 [2024-11-05 12:44:55.427648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-11-05 12:44:55.427662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.330 [2024-11-05 12:44:55.427674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-11-05 12:44:55.427691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0ed0 is same with the state(6) to be set 00:30:26.330 [2024-11-05 12:44:55.427868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b3f00 is same with the state(6) to be set
00:30:26.330 [2024-11-05 12:44:55.430871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.330 [2024-11-05 12:44:55.430901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.330 [2024-11-05 12:44:55.430930] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.430946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.430963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.430984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431100] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 
[2024-11-05 12:44:55.431439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.331 [2024-11-05 12:44:55.431954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.331 [2024-11-05 12:44:55.431968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.431981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.431996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 
12:44:55.432098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.332 [2024-11-05 12:44:55.432246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.332 [2024-11-05 12:44:55.432260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.332 [2024-11-05 12:44:55.432606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.332 [2024-11-05 12:44:55.432609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.332 [2024-11-05 12:44:55.432618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.333 [2024-11-05 12:44:55.432630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.333 [2024-11-05 12:44:55.432643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.333 [2024-11-05 12:44:55.432655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.333 [2024-11-05 12:44:55.432668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.333 [2024-11-05 12:44:55.432695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.333 [2024-11-05 12:44:55.432708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is
same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.432721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.432733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.432746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.432761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.432779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.432793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.333 [2024-11-05 12:44:55.432830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.432948] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.432994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.333 [2024-11-05 12:44:55.433009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.333 [2024-11-05 12:44:55.433022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.333 [2024-11-05 12:44:55.433051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.333 [2024-11-05 12:44:55.433066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.333 [2024-11-05 12:44:55.433079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.333 [2024-11-05 12:44:55.433091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.333 [2024-11-05 12:44:55.433104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.333 [2024-11-05 12:44:55.433116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set
00:30:26.333 [2024-11-05 12:44:55.433129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.433158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.433172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b4d70 is same with the state(6) to be set 00:30:26.333 [2024-11-05 12:44:55.433187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.333 [2024-11-05 12:44:55.433374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.333 [2024-11-05 12:44:55.433395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 
12:44:55.433758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.433975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.433989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b5260 is same with the state(6) to be set 00:30:26.334 [2024-11-05 12:44:55.434165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b5260 is same with the state(6) to be set 00:30:26.334 [2024-11-05 12:44:55.434195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b5260 is same with the state(6) to be set 00:30:26.334 [2024-11-05 12:44:55.434210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 
12:44:55.434228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.334 [2024-11-05 12:44:55.434509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.334 [2024-11-05 12:44:55.434524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 
[2024-11-05 12:44:55.434716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.335 [2024-11-05 12:44:55.434938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.335 [2024-11-05 12:44:55.434956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.335 [2024-11-05 12:44:55.434969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.434981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.434998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same 
with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 
00:30:26.336 [2024-11-05 12:44:55.435340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.435368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1825de0 is same with the state(6) to be set 00:30:26.336 [2024-11-05 12:44:55.438136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438292] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 
12:44:55.438618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438773] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.336 [2024-11-05 12:44:55.438842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.336 [2024-11-05 12:44:55.438857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.438881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.438896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.438910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.438924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.438938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.438957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.438970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.438986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.438999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 
[2024-11-05 12:44:55.439109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.439825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.337 [2024-11-05 12:44:55.439838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.337 [2024-11-05 12:44:55.454360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.454413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.454429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.454443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.454459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.454473] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.454489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.454502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.454518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.454531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.454547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.454561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.454932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:30:26.338 [2024-11-05 12:44:55.455034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:30:26.338 [2024-11-05 12:44:55.455114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d4260 (9): Bad file descriptor 00:30:26.338 [2024-11-05 12:44:55.455146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e4970 (9): Bad file descriptor 00:30:26.338 [2024-11-05 12:44:55.455210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be610 is same with the state(6) to be set 00:30:26.338 [2024-11-05 12:44:55.455362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0fb00 (9): Bad file descriptor 00:30:26.338 [2024-11-05 12:44:55.455418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a214d0 is same with the state(6) to be set 00:30:26.338 [2024-11-05 12:44:55.455557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b31c0 (9): Bad file descriptor 00:30:26.338 [2024-11-05 12:44:55.455611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d8680 is same with the state(6) to be set 00:30:26.338 [2024-11-05 12:44:55.455785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:26.338 [2024-11-05 12:44:55.455876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:26.338 [2024-11-05 12:44:55.455905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.455917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0e800 is same with the state(6) to be set 00:30:26.338 [2024-11-05 12:44:55.455948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b3620 (9): Bad file descriptor 00:30:26.338 [2024-11-05 12:44:55.455979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0ed0 (9): Bad file descriptor 00:30:26.338 [2024-11-05 12:44:55.457722] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:26.338 [2024-11-05 12:44:55.457764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:30:26.338 [2024-11-05 12:44:55.457791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a214d0 (9): Bad file descriptor 00:30:26.338 [2024-11-05 12:44:55.457895] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:26.338 [2024-11-05 12:44:55.457984] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:26.338 [2024-11-05 12:44:55.458916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-11-05 12:44:55.458949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e4970 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-11-05 12:44:55.458966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e4970 is same with the state(6) to be set 00:30:26.338 [2024-11-05 12:44:55.459065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-11-05 12:44:55.459090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x19d4260 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-11-05 12:44:55.459106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d4260 is same with the state(6) to be set 00:30:26.338 [2024-11-05 12:44:55.459206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.459254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.459286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.459316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.459346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 
[2024-11-05 12:44:55.459375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.459404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.338 [2024-11-05 12:44:55.459433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.338 [2024-11-05 12:44:55.459448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.339 [2024-11-05 12:44:55.459854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.339 [2024-11-05 12:44:55.459881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:26.339 [2024-11-05 12:44:55.459898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.459913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.459929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.459944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.459960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.459974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.459990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.339 [2024-11-05 12:44:55.460640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.339 [2024-11-05 12:44:55.460653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.460981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.460995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.461010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.461024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.461039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.461052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.461067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.461081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.461096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.461110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.461125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.461148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.461167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.461182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.461196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8720 is same with the state(6) to be set
00:30:26.340 [2024-11-05 12:44:55.461362] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:26.340 [2024-11-05 12:44:55.461465] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:26.340 [2024-11-05 12:44:55.461932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.340 [2024-11-05 12:44:55.461961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a214d0 with addr=10.0.0.2, port=4420
00:30:26.340 [2024-11-05 12:44:55.461979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a214d0 is same with the state(6) to be set
00:30:26.340 [2024-11-05 12:44:55.461999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e4970 (9): Bad file descriptor
00:30:26.340 [2024-11-05 12:44:55.462020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d4260 (9): Bad file descriptor
00:30:26.340 [2024-11-05 12:44:55.463355] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:26.340 [2024-11-05 12:44:55.463395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:30:26.340 [2024-11-05 12:44:55.463437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a214d0 (9): Bad file descriptor
00:30:26.340 [2024-11-05 12:44:55.463458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:30:26.340 [2024-11-05 12:44:55.463472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:30:26.340 [2024-11-05 12:44:55.463496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:30:26.340 [2024-11-05 12:44:55.463511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:30:26.340 [2024-11-05 12:44:55.463527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:30:26.340 [2024-11-05 12:44:55.463541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:30:26.340 [2024-11-05 12:44:55.463554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:30:26.340 [2024-11-05 12:44:55.463566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:30:26.340 [2024-11-05 12:44:55.463728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.340 [2024-11-05 12:44:55.463755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b31c0 with addr=10.0.0.2, port=4420
00:30:26.340 [2024-11-05 12:44:55.463771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b31c0 is same with the state(6) to be set
00:30:26.340 [2024-11-05 12:44:55.463786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:30:26.340 [2024-11-05 12:44:55.463799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:30:26.340 [2024-11-05 12:44:55.463812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:30:26.340 [2024-11-05 12:44:55.463825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:30:26.340 [2024-11-05 12:44:55.464143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b31c0 (9): Bad file descriptor
00:30:26.340 [2024-11-05 12:44:55.464224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:30:26.340 [2024-11-05 12:44:55.464243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:30:26.340 [2024-11-05 12:44:55.464257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:30:26.340 [2024-11-05 12:44:55.464270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:30:26.340 [2024-11-05 12:44:55.464964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14be610 (9): Bad file descriptor
00:30:26.340 [2024-11-05 12:44:55.465022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d8680 (9): Bad file descriptor
00:30:26.340 [2024-11-05 12:44:55.465055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0e800 (9): Bad file descriptor
00:30:26.340 [2024-11-05 12:44:55.465220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.465243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.465266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.465281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.465297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.465310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.465326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.465340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.465355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.465368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.465383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.465398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.340 [2024-11-05 12:44:55.465414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.340 [2024-11-05 12:44:55.465435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.465984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.465998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.341 [2024-11-05 12:44:55.466377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.341 [2024-11-05 12:44:55.466391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.342 [2024-11-05 12:44:55.466711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.342 [2024-11-05 12:44:55.466726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 
[2024-11-05 12:44:55.466739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.466979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.466995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.467008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.467023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.467040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.467056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.467069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.467085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.467098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.467113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.467126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.467142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.467165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.467178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bf590 is same with the state(6) to be set 00:30:26.342 [2024-11-05 12:44:55.468459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:26.342 [2024-11-05 12:44:55.468534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.342 [2024-11-05 12:44:55.468818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.342 [2024-11-05 12:44:55.468833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.468851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.468876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.468892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.468907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.468921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.468936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.468950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.468966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.468979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.468994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:26.343 [2024-11-05 12:44:55.469051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 
12:44:55.469732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469920] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.469977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.469990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.470005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.470019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.343 [2024-11-05 12:44:55.470033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.343 [2024-11-05 12:44:55.470047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-11-05 12:44:55.470062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.344 [2024-11-05 12:44:55.470076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-11-05 12:44:55.470092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.344 [2024-11-05 12:44:55.470105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-11-05 12:44:55.470120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.344 [2024-11-05 12:44:55.470133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-11-05 12:44:55.470157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.344 [2024-11-05 12:44:55.470171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-11-05 12:44:55.470186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.344 [2024-11-05 12:44:55.470199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-11-05 12:44:55.470222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.344 [2024-11-05 12:44:55.470239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.344 [2024-11-05 12:44:55.470255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.344 
[2024-11-05 12:44:55.470269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.344 [2024-11-05 12:44:55.470284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.344 [2024-11-05 12:44:55.470297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.344 [2024-11-05 12:44:55.470312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.344 [2024-11-05 12:44:55.470325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.344 [2024-11-05 12:44:55.470340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.344 [2024-11-05 12:44:55.470354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.344 [2024-11-05 12:44:55.470368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.344 [2024-11-05 12:44:55.470381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.344 [2024-11-05 12:44:55.470396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.344 [2024-11-05 12:44:55.470409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.344 [2024-11-05 12:44:55.470423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5900 is same with the state(6) to be set
00:30:26.344 [2024-11-05 12:44:55.471716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.344 [2024-11-05 12:44:55.471740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (identical READ/ABORTED - SQ DELETION pairs repeat for cid:6-61, lba:8960-16000, timestamps 12:44:55.471761-12:44:55.473457) ...
00:30:26.345 [2024-11-05 12:44:55.473472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.345 [2024-11-05 12:44:55.473486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.345 [2024-11-05 12:44:55.473501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.345 [2024-11-05 12:44:55.473514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.345 [2024-11-05 12:44:55.473530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.345 [2024-11-05 12:44:55.473543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.345 [2024-11-05 12:44:55.473557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.345 [2024-11-05 12:44:55.473570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.345 [2024-11-05 12:44:55.473586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.345 [2024-11-05 12:44:55.473600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.345 [2024-11-05 12:44:55.473614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.345 [2024-11-05 12:44:55.473627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.345 [2024-11-05 12:44:55.473643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.345 [2024-11-05 12:44:55.473656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.345 [2024-11-05 12:44:55.473669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bab70 is same with the state(6) to be set
00:30:26.345 [2024-11-05 12:44:55.474908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:26.345 [2024-11-05 12:44:55.474940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:30:26.346 [2024-11-05 12:44:55.474958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:30:26.346 [2024-11-05 12:44:55.475376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.346 [2024-11-05 12:44:55.475407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b3620 with addr=10.0.0.2, port=4420
00:30:26.346 [2024-11-05 12:44:55.475424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3620 is same with the state(6) to be set
00:30:26.346 [2024-11-05 12:44:55.475524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.346 [2024-11-05 12:44:55.475549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0ed0 with addr=10.0.0.2, port=4420
00:30:26.346 [2024-11-05 12:44:55.475565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0ed0 is same with the state(6) to be set
00:30:26.346 [2024-11-05 12:44:55.475640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.346 [2024-11-05 12:44:55.475669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a0fb00 with addr=10.0.0.2, port=4420
00:30:26.346 [2024-11-05 12:44:55.475686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0fb00 is same with the state(6) to be set
00:30:26.346 [2024-11-05 12:44:55.476555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:30:26.346 [2024-11-05 12:44:55.476584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:30:26.346 [2024-11-05 12:44:55.476602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:30:26.346 [2024-11-05 12:44:55.476618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:30:26.346 [2024-11-05 12:44:55.476682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b3620 (9): Bad file descriptor
00:30:26.346 [2024-11-05 12:44:55.476706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0ed0 (9): Bad file descriptor
00:30:26.346 [2024-11-05 12:44:55.476725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0fb00 (9): Bad file descriptor
00:30:26.346 [2024-11-05 12:44:55.476801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:26.346 [2024-11-05 12:44:55.476822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... (identical READ/ABORTED - SQ DELETION pairs repeat for cid:1-39, lba:16512-21376, timestamps 12:44:55.476843-12:44:55.477986) ...
00:30:26.347 [2024-11-05
12:44:55.478001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:26.347 [2024-11-05 12:44:55.478503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.478706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.478720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b2b80 is same with the state(6) to be set 00:30:26.347 [2024-11-05 12:44:55.479974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.479997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.480018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.480034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.347 [2024-11-05 12:44:55.480049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.347 [2024-11-05 12:44:55.480063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:26.348 [2024-11-05 12:44:55.480280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.480841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.480869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490184] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490356] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.348 [2024-11-05 12:44:55.490443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.348 [2024-11-05 12:44:55.490458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 
12:44:55.490700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.490978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.490993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.491006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.491022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.491035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.491050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.491064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.491079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.491092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.491108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.491121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.491137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.491150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.491170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b6ae0 is same with the state(6) to be set 00:30:26.349 [2024-11-05 12:44:55.492565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:26.349 [2024-11-05 12:44:55.492628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.492985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.492998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.493014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.493027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.493042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.349 [2024-11-05 12:44:55.493055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.349 [2024-11-05 12:44:55.493070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493311] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493467] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 
12:44:55.493798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.493985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.493998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.494013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.494026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.494041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.494054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.494069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.494082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.494097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.350 [2024-11-05 12:44:55.494110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.350 [2024-11-05 12:44:55.494125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:26.351 [2024-11-05 12:44:55.494314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:26.351 [2024-11-05 12:44:55.494465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.351 [2024-11-05 12:44:55.494479] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:26.351 [2024-11-05 12:44:55.494501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b9600 is same with the state(6) to be set
00:30:26.351 [2024-11-05 12:44:55.496178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:30:26.351 [2024-11-05 12:44:55.496212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:30:26.351 task offset: 16768 on job bdev=Nvme5n1 fails
00:30:26.351
00:30:26.351 Latency(us)
00:30:26.351 [2024-11-05T11:44:55.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:26.351 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme1n1 ended in about 0.77 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme1n1 : 0.77 166.87 10.43 83.44 0.00 252391.98 32428.18 217482.43
00:30:26.351 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme2n1 ended in about 0.76 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme2n1 : 0.76 167.99 10.50 84.00 0.00 244584.55 17767.54 253211.69
00:30:26.351 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme3n1 ended in about 0.77 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme3n1 : 0.77 166.17 10.39 83.09 0.00 241290.56 16990.81 253211.69
00:30:26.351 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme4n1 ended in about 0.78 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme4n1 : 0.78 164.41 10.28 82.20 0.00 238036.26 19320.98 219035.88
00:30:26.351 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme5n1 ended in about 0.74 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme5n1 : 0.74 173.98 10.87 86.99 0.00 217677.68 5291.43 259425.47
00:30:26.351 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme6n1 ended in about 0.74 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme6n1 : 0.74 173.71 10.86 86.86 0.00 211969.90 7281.78 256318.58
00:30:26.351 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme7n1 ended in about 0.79 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme7n1 : 0.79 161.81 10.11 80.91 0.00 223979.14 20680.25 253211.69
00:30:26.351 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme8n1 ended in about 0.76 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme8n1 : 0.76 169.24 10.58 84.62 0.00 206349.65 21262.79 226803.11
00:30:26.351 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme9n1 ended in about 0.79 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme9n1 : 0.79 80.58 5.04 80.58 0.00 319788.37 34758.35 284280.60
00:30:26.351 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:26.351 Job: Nvme10n1 ended in about 0.77 seconds with error
00:30:26.351 Verification LBA range: start 0x0 length 0x400
00:30:26.351 Nvme10n1 : 0.77 89.20 5.58 82.74 0.00 289449.02 19223.89 281173.71
00:30:26.351 [2024-11-05T11:44:55.589Z] ===================================================================================================================
00:30:26.351 [2024-11-05T11:44:55.589Z] Total : 1513.97 94.62 835.41 0.00 240398.07 5291.43 284280.60
00:30:26.351
[2024-11-05 12:44:55.522192] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:26.351 [2024-11-05 12:44:55.522280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:30:26.351 [2024-11-05 12:44:55.522556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.351 [2024-11-05 12:44:55.522594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d4260 with addr=10.0.0.2, port=4420 00:30:26.351 [2024-11-05 12:44:55.522614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d4260 is same with the state(6) to be set 00:30:26.351 [2024-11-05 12:44:55.522713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.351 [2024-11-05 12:44:55.522739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e4970 with addr=10.0.0.2, port=4420 00:30:26.351 [2024-11-05 12:44:55.522755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e4970 is same with the state(6) to be set 00:30:26.351 [2024-11-05 12:44:55.522828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.351 [2024-11-05 12:44:55.522853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a214d0 with addr=10.0.0.2, port=4420 00:30:26.351 [2024-11-05 12:44:55.522881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a214d0 is same with the state(6) to be set 00:30:26.351 [2024-11-05 12:44:55.522985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.351 [2024-11-05 12:44:55.523010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b31c0 with addr=10.0.0.2, port=4420 00:30:26.351 [2024-11-05 12:44:55.523026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x15b31c0 is same with the state(6) to be set 00:30:26.351 [2024-11-05 12:44:55.523041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:26.351 [2024-11-05 12:44:55.523054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:26.351 [2024-11-05 12:44:55.523070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:26.351 [2024-11-05 12:44:55.523088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:26.351 [2024-11-05 12:44:55.523106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:30:26.351 [2024-11-05 12:44:55.523119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:30:26.351 [2024-11-05 12:44:55.523132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:30:26.351 [2024-11-05 12:44:55.523145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:30:26.351 [2024-11-05 12:44:55.523159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:30:26.351 [2024-11-05 12:44:55.523176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:30:26.351 [2024-11-05 12:44:55.523189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:30:26.351 [2024-11-05 12:44:55.523201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:30:26.351 [2024-11-05 12:44:55.523300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b31c0 (9): Bad file descriptor 00:30:26.351 [2024-11-05 12:44:55.523335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a214d0 (9): Bad file descriptor 00:30:26.351 [2024-11-05 12:44:55.523360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e4970 (9): Bad file descriptor 00:30:26.351 [2024-11-05 12:44:55.523384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d4260 (9): Bad file descriptor 00:30:26.352 [2024-11-05 12:44:55.523683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.352 [2024-11-05 12:44:55.523711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19d8680 with addr=10.0.0.2, port=4420 00:30:26.352 [2024-11-05 12:44:55.523728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d8680 is same with the state(6) to be set 00:30:26.352 [2024-11-05 12:44:55.523826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.352 [2024-11-05 12:44:55.523866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14be610 with addr=10.0.0.2, port=4420 00:30:26.352 [2024-11-05 12:44:55.523884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14be610 is same with the state(6) to be set 00:30:26.352 [2024-11-05 12:44:55.523954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.352 [2024-11-05 12:44:55.523979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a0e800 with addr=10.0.0.2, port=4420 00:30:26.352 [2024-11-05 12:44:55.523995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0e800 is same with the state(6) to be set 
00:30:26.352 [2024-11-05 12:44:55.524050] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:30:26.352 [2024-11-05 12:44:55.524079] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:30:26.352 [2024-11-05 12:44:55.524100] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:30:26.352 [2024-11-05 12:44:55.524120] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:30:26.352 [2024-11-05 12:44:55.524140] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:30:26.352 [2024-11-05 12:44:55.524162] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:30:26.352 [2024-11-05 12:44:55.524181] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:30:26.352 [2024-11-05 12:44:55.525021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:30:26.352 [2024-11-05 12:44:55.525049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:30:26.352 [2024-11-05 12:44:55.525067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:26.352 [2024-11-05 12:44:55.525129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d8680 (9): Bad file descriptor 00:30:26.352 [2024-11-05 12:44:55.525162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14be610 (9): Bad file descriptor 00:30:26.352 [2024-11-05 12:44:55.525180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0e800 (9): Bad file descriptor 00:30:26.352 [2024-11-05 12:44:55.525196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.525208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.525221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.525234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:30:26.352 [2024-11-05 12:44:55.525252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.525264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.525277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.525288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:30:26.352 [2024-11-05 12:44:55.525302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.525313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.525325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.525337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:30:26.352 [2024-11-05 12:44:55.525350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.525362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.525374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.525385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:30:26.352 [2024-11-05 12:44:55.525768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.352 [2024-11-05 12:44:55.525797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a0fb00 with addr=10.0.0.2, port=4420 00:30:26.352 [2024-11-05 12:44:55.525814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0fb00 is same with the state(6) to be set 00:30:26.352 [2024-11-05 12:44:55.525918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.352 [2024-11-05 12:44:55.525944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b0ed0 with addr=10.0.0.2, port=4420 00:30:26.352 [2024-11-05 12:44:55.525960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b0ed0 is same with the state(6) to be set 00:30:26.352 [2024-11-05 12:44:55.526046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.352 [2024-11-05 12:44:55.526071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b3620 with addr=10.0.0.2, port=4420 00:30:26.352 [2024-11-05 12:44:55.526086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3620 is same with the state(6) to be set 00:30:26.352 [2024-11-05 12:44:55.526101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.526113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.526126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.526140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:30:26.352 [2024-11-05 12:44:55.526154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.526166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.526179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.526190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:30:26.352 [2024-11-05 12:44:55.526204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.526216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.526229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.526241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:30:26.352 [2024-11-05 12:44:55.526308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0fb00 (9): Bad file descriptor 00:30:26.352 [2024-11-05 12:44:55.526333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b0ed0 (9): Bad file descriptor 00:30:26.352 [2024-11-05 12:44:55.526352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b3620 (9): Bad file descriptor 00:30:26.352 [2024-11-05 12:44:55.526394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.526412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.526426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.526438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:30:26.352 [2024-11-05 12:44:55.526453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.526470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.526483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.526495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:30:26.352 [2024-11-05 12:44:55.526508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:26.352 [2024-11-05 12:44:55.526524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:26.352 [2024-11-05 12:44:55.526555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:26.352 [2024-11-05 12:44:55.526574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:26.920 12:44:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 738811 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 738811 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 738811 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:30:27.855 12:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:30:27.855 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.856 rmmod nvme_tcp 00:30:27.856 rmmod nvme_fabrics 00:30:27.856 rmmod nvme_keyring 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 738637 ']' 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 738637 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 738637 ']' 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 738637 00:30:27.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (738637) - No such process 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 738637 is not found' 00:30:27.856 Process with pid 738637 is not found 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:27.856 12:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.856 12:44:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:30.419 00:30:30.419 real 0m7.193s 00:30:30.419 user 0m17.219s 00:30:30.419 sys 0m1.364s 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.419 ************************************ 00:30:30.419 END TEST nvmf_shutdown_tc3 00:30:30.419 ************************************ 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:30.419 ************************************ 00:30:30.419 START TEST nvmf_shutdown_tc4 00:30:30.419 ************************************ 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:30.419 12:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.419 12:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:30.419 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:30.420 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:30.420 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.420 12:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:30.420 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:30.420 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:30.420 12:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:30.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:30.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:30:30.420 00:30:30.420 --- 10.0.0.2 ping statistics --- 00:30:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.420 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:30.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:30.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:30:30.420 00:30:30.420 --- 10.0.0.1 ping statistics --- 00:30:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.420 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:30.420 12:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=739720 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 739720 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 739720 ']' 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:30.420 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:30.420 [2024-11-05 12:44:59.355095] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:30.421 [2024-11-05 12:44:59.355186] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.421 [2024-11-05 12:44:59.428034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.421 [2024-11-05 12:44:59.473671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.421 [2024-11-05 12:44:59.473724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.421 [2024-11-05 12:44:59.473753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.421 [2024-11-05 12:44:59.473764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.421 [2024-11-05 12:44:59.473774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:30.421 [2024-11-05 12:44:59.475268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.421 [2024-11-05 12:44:59.475334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.421 [2024-11-05 12:44:59.475401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:30.421 [2024-11-05 12:44:59.475405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:30.421 [2024-11-05 12:44:59.621175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.421 12:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.421 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.679 12:44:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:30.679 Malloc1 00:30:30.679 [2024-11-05 12:44:59.723074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.679 Malloc2 00:30:30.679 Malloc3 00:30:30.679 Malloc4 00:30:30.679 Malloc5 00:30:30.937 Malloc6 00:30:30.937 Malloc7 00:30:30.937 Malloc8 00:30:30.937 Malloc9 
00:30:30.937 Malloc10 00:30:30.937 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.937 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:30.937 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.937 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:31.193 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=739918 00:30:31.193 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:31.193 12:45:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:31.193 [2024-11-05 12:45:00.245353] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 739720 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 739720 ']' 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 739720 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 739720 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 739720' 00:30:36.461 killing process with pid 739720 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 739720 00:30:36.461 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 739720 00:30:36.461 [2024-11-05 12:45:05.232070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295400 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.232166] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295400 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.232183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295400 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295dc0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295dc0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295dc0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295dc0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295dc0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295dc0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1295dc0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233931] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.233990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.234001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.234012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294f30 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.235152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1296780 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.235181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1296780 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.235196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1296780 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.236601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297140 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.236631] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297140 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.236645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297140 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.236657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297140 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.236669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297140 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.236681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297140 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.237917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12962b0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.237949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12962b0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.237965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12962b0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.237977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12962b0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.237989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12962b0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.238001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12962b0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.238013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12962b0 is same with the state(6) to be set 00:30:36.461 [2024-11-05 12:45:05.240641] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14eee00 is same with the state(6) to be set 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 starting I/O failed: -6 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 starting I/O failed: -6 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.461 starting I/O failed: -6 00:30:36.461 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, 
sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 [2024-11-05 12:45:05.241735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.462 starting I/O failed: -6 00:30:36.462 starting I/O failed: -6 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, 
sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 [2024-11-05 12:45:05.242944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O 
failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write 
completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 [2024-11-05 12:45:05.244124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.462 NVMe io qpair process completion error 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write completed with error (sct=0, sc=8) 00:30:36.462 Write 
completed with error (sct=0, sc=8) 00:30:36.462 starting I/O failed: -6
00:30:36.462 Write completed with error (sct=0, sc=8) [identical "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines repeat for every outstanding write on each queue pair; repeats elided below]
00:30:36.463 [2024-11-05 12:45:05.245326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:36.463 [2024-11-05 12:45:05.246024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297ae0 is same with the state(6) to be set [logged 6 times, 12:45:05.246024 through 12:45:05.246109]
00:30:36.463 [2024-11-05 12:45:05.246300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.463 [2024-11-05 12:45:05.246779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15035a0 is same with the state(6) to be set [logged 6 times, 12:45:05.246779 through 12:45:05.246891]
00:30:36.463 [2024-11-05 12:45:05.247422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.464 [2024-11-05 12:45:05.249330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.464 NVMe io qpair process completion error
00:30:36.464 [2024-11-05 12:45:05.250592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:36.465 [2024-11-05 12:45:05.251669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.465 [2024-11-05 12:45:05.252810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.465 [2024-11-05 12:45:05.255328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.465 NVMe io qpair process completion error
00:30:36.466 [2024-11-05 12:45:05.256499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.466 [2024-11-05 12:45:05.257571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.466 Write completed with error (sct=0, sc=8)
starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 
Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.466 starting I/O failed: -6 00:30:36.466 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 [2024-11-05 12:45:05.258746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, 
sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error 
(sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with 
error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 [2024-11-05 12:45:05.260381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.467 NVMe io qpair process completion error 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write 
completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 [2024-11-05 12:45:05.261512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 
00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.467 starting I/O failed: -6 00:30:36.467 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write 
completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 [2024-11-05 12:45:05.262501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 
00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 
00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 [2024-11-05 12:45:05.263645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: 
-6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O 
failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.468 Write completed with error (sct=0, sc=8) 00:30:36.468 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting 
I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 [2024-11-05 12:45:05.265419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.469 NVMe io qpair process completion error 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write completed with error (sct=0, sc=8) 00:30:36.469 Write 
completed with error (sct=0, sc=8) 00:30:36.469 starting I/O failed: -6
00:30:36.469 Write completed with error (sct=0, sc=8)
00:30:36.469 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:30:36.469 [2024-11-05 12:45:05.266815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:36.469 [... repeated write-error entries omitted ...]
00:30:36.469 [2024-11-05 12:45:05.267943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.470 [... repeated write-error entries omitted ...]
00:30:36.470 [2024-11-05 12:45:05.270025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:36.470 [... repeated write-error entries omitted ...]
00:30:36.470 [2024-11-05 12:45:05.272328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.470 NVMe io qpair process completion error
00:30:36.470 [... repeated write-error entries omitted ...]
00:30:36.471 [2024-11-05 12:45:05.275448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:36.471 [... repeated write-error entries omitted ...]
00:30:36.472 [2024-11-05 12:45:05.278594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:36.472 NVMe io qpair process completion error
00:30:36.472 [... repeated write-error entries omitted ...]
00:30:36.473 Write completed with error
(sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with 
error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 [2024-11-05 12:45:05.283147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.473 NVMe io qpair process completion error 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 
00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 [2024-11-05 12:45:05.284401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 
00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 starting I/O failed: -6 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.473 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write 
completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 [2024-11-05 12:45:05.285429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write 
completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 
00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 [2024-11-05 12:45:05.286590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 
Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 
00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.474 starting I/O failed: -6 00:30:36.474 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: 
-6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 [2024-11-05 12:45:05.288447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.475 NVMe io qpair process completion error 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 
00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 [2024-11-05 12:45:05.289696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write 
completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O 
failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 [2024-11-05 12:45:05.290685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 
00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.475 Write completed with error (sct=0, sc=8) 00:30:36.475 starting I/O failed: -6 00:30:36.476 Write completed with error (sct=0, sc=8) 00:30:36.476 Write completed with error (sct=0, sc=8) 00:30:36.476 starting I/O failed: -6 00:30:36.476 Write completed with error (sct=0, sc=8) 00:30:36.476 starting I/O failed: -6 00:30:36.476 Write completed with error (sct=0, sc=8) 00:30:36.476 starting I/O failed: -6 00:30:36.476 Write completed with error (sct=0, sc=8) 00:30:36.476 Write completed with 
error (sct=0, sc=8) 00:30:36.476 starting I/O failed: -6 00:30:36.476 [2024-11-05 12:45:05.291842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:36.476 Write completed with error (sct=0, sc=8) 00:30:36.476 starting I/O failed: -6 00:30:36.476 [2024-11-05 12:45:05.294177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:36.476 NVMe io qpair process completion error 00:30:36.476 Write completed with error (sct=0, sc=8) 00:30:36.477 Initializing NVMe Controllers 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:30:36.477 Controller IO queue size 128, less than required.
00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:30:36.477 Controller IO queue size 128, less than required. 
00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:30:36.477 Controller IO queue size 128, less than required. 00:30:36.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:30:36.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:30:36.477 Initialization complete. Launching workers. 
00:30:36.477 ========================================================
00:30:36.477                                                            Latency(us)
00:30:36.477 Device Information                                                       :      IOPS    MiB/s    Average        min        max
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1797.78    77.25   71216.78    1070.10  128228.12
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1787.30    76.80   71653.54    1007.14  127860.46
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1828.14    78.55   70085.29     790.17  125979.51
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1852.95    79.62   69177.61     894.85  128621.65
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1800.34    77.36   71230.28     624.00  131945.44
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1802.48    77.45   71169.26     870.22  133681.50
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1785.80    76.73   71857.52    1023.11  135917.31
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1730.63    74.36   73718.60     621.91  121884.35
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1771.26    76.11   71647.02     991.13  119922.57
00:30:36.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1774.90    76.27   71527.46     794.10  120692.85
00:30:36.477 ========================================================
00:30:36.477 Total                                                                    :  17931.58   770.50   71309.34     621.91  135917.31
00:30:36.477
00:30:36.477 [2024-11-05 12:45:05.303257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f140 is same with the state(6) to be set
00:30:36.477 [2024-11-05 12:45:05.303397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2d6a0 is same with the state(6) to be set
00:30:36.477 [2024-11-05 12:45:05.303456] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f470 is same with the state(6) to be set 00:30:36.477 [2024-11-05 12:45:05.303514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe32b30 is same with the state(6) to be set 00:30:36.477 [2024-11-05 12:45:05.303570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2cfb0 is same with the state(6) to be set 00:30:36.477 [2024-11-05 12:45:05.303624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2d9d0 is same with the state(6) to be set 00:30:36.477 [2024-11-05 12:45:05.303679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2f7a0 is same with the state(6) to be set 00:30:36.477 [2024-11-05 12:45:05.303735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2d190 is same with the state(6) to be set 00:30:36.477 [2024-11-05 12:45:05.303789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2d370 is same with the state(6) to be set 00:30:36.477 [2024-11-05 12:45:05.303852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2ee10 is same with the state(6) to be set 00:30:36.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:36.477 12:45:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 739918 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 739918 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@638 -- # local arg=wait 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 739918 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.852 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.853 rmmod nvme_tcp 00:30:37.853 rmmod nvme_fabrics 00:30:37.853 rmmod nvme_keyring 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 739720 ']' 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 739720 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 739720 ']' 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 739720 00:30:37.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (739720) - No such process 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 739720 is not found' 00:30:37.853 Process with pid 739720 is not found 00:30:37.853 
12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.853 12:45:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.750 00:30:39.750 real 0m9.703s 00:30:39.750 user 0m22.091s 00:30:39.750 sys 0m6.164s 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:39.750 12:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:39.750 ************************************ 00:30:39.750 END TEST nvmf_shutdown_tc4 00:30:39.750 ************************************ 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:39.750 00:30:39.750 real 0m36.481s 00:30:39.750 user 1m35.744s 00:30:39.750 sys 0m12.455s 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:39.750 ************************************ 00:30:39.750 END TEST nvmf_shutdown 00:30:39.750 ************************************ 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:39.750 ************************************ 00:30:39.750 START TEST nvmf_nsid 00:30:39.750 ************************************ 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:39.750 * Looking for test storage... 
00:30:39.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:30:39.750 12:45:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.009 
12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.009 --rc genhtml_branch_coverage=1 00:30:40.009 --rc genhtml_function_coverage=1 00:30:40.009 --rc genhtml_legend=1 00:30:40.009 --rc geninfo_all_blocks=1 00:30:40.009 --rc 
geninfo_unexecuted_blocks=1 00:30:40.009 00:30:40.009 ' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.009 --rc genhtml_branch_coverage=1 00:30:40.009 --rc genhtml_function_coverage=1 00:30:40.009 --rc genhtml_legend=1 00:30:40.009 --rc geninfo_all_blocks=1 00:30:40.009 --rc geninfo_unexecuted_blocks=1 00:30:40.009 00:30:40.009 ' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.009 --rc genhtml_branch_coverage=1 00:30:40.009 --rc genhtml_function_coverage=1 00:30:40.009 --rc genhtml_legend=1 00:30:40.009 --rc geninfo_all_blocks=1 00:30:40.009 --rc geninfo_unexecuted_blocks=1 00:30:40.009 00:30:40.009 ' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.009 --rc genhtml_branch_coverage=1 00:30:40.009 --rc genhtml_function_coverage=1 00:30:40.009 --rc genhtml_legend=1 00:30:40.009 --rc geninfo_all_blocks=1 00:30:40.009 --rc geninfo_unexecuted_blocks=1 00:30:40.009 00:30:40.009 ' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.009 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.010 12:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:40.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.010 12:45:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.915 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:41.916 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:41.916 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:41.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:41.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.916 12:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.916 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.174 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:42.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:30:42.174 00:30:42.174 --- 10.0.0.2 ping statistics --- 00:30:42.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.174 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:30:42.174 00:30:42.174 --- 10.0.0.1 ping statistics --- 00:30:42.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.174 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.174 12:45:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=743133 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 743133 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 743133 ']' 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:42.174 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:42.174 [2024-11-05 12:45:11.295509] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
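Earlier in this trace, common.sh sorts the detected PCI NICs into e810/x722/mlx buckets by vendor:device ID (Intel 0x8086, Mellanox 0x15b3) before walking their net devices. A minimal sketch of that matching, with a hypothetical helper name and only the IDs visible in this log (the real script keeps per-family arrays and is not limited to these pairs):

```shell
# Hypothetical classify_nic helper. The vendor:device pairs below are the ones
# common.sh matches in this trace (0x1592/0x159b -> e810, 0x37d2 -> x722,
# Mellanox 0x15b3 devices -> mlx); this is not an exhaustive list.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # -> e810 (the two ports found in this run)
```

This matches the run above, where both 0000:0a:00.0 and 0000:0a:00.1 report 0x8086:0x159b and land in the e810 array, so `pci_devs` ends up with 2 entries.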
00:30:42.174 [2024-11-05 12:45:11.295581] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.174 [2024-11-05 12:45:11.367526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.174 [2024-11-05 12:45:11.412094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.174 [2024-11-05 12:45:11.412166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.174 [2024-11-05 12:45:11.412180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.174 [2024-11-05 12:45:11.412191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.174 [2024-11-05 12:45:11.412202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:42.174 [2024-11-05 12:45:11.412822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=743207 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:42.432 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:42.433 
12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=20c1d8af-14f0-4d19-ba7c-b2e0706e0319 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1a596214-eda7-4939-b4ad-ce4d5d21ff60 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=877aac41-a4fc-453e-9596-80aaca3632c1 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:42.433 null0 00:30:42.433 null1 00:30:42.433 null2 00:30:42.433 [2024-11-05 12:45:11.594779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.433 [2024-11-05 12:45:11.604643] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:30:42.433 [2024-11-05 12:45:11.604719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743207 ] 00:30:42.433 [2024-11-05 12:45:11.619022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 743207 /var/tmp/tgt2.sock 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 743207 ']' 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:42.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:42.433 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:42.433 [2024-11-05 12:45:11.673474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.691 [2024-11-05 12:45:11.719854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.948 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:42.948 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:30:42.948 12:45:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:43.206 [2024-11-05 12:45:12.352356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.206 [2024-11-05 12:45:12.368560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:43.206 nvme0n1 nvme0n2 00:30:43.206 nvme1n1 00:30:43.206 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:43.206 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:43.206 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:30:43.771 12:45:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:30:45.143 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:45.143 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:30:45.143 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:30:45.143 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:30:45.143 12:45:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 20c1d8af-14f0-4d19-ba7c-b2e0706e0319 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:45.143 12:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=20c1d8af14f04d19ba7cb2e0706e0319 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 20C1D8AF14F04D19BA7CB2E0706E0319 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 20C1D8AF14F04D19BA7CB2E0706E0319 == \2\0\C\1\D\8\A\F\1\4\F\0\4\D\1\9\B\A\7\C\B\2\E\0\7\0\6\E\0\3\1\9 ]] 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1a596214-eda7-4939-b4ad-ce4d5d21ff60 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:45.143 
12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1a596214eda74939b4adce4d5d21ff60 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1A596214EDA74939B4ADCE4D5D21FF60 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1A596214EDA74939B4ADCE4D5D21FF60 == \1\A\5\9\6\2\1\4\E\D\A\7\4\9\3\9\B\4\A\D\C\E\4\D\5\D\2\1\F\F\6\0 ]] 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 877aac41-a4fc-453e-9596-80aaca3632c1 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:45.143 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
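The `uuid2nguid` steps above derive each namespace's expected NGUID from the UUID generated at setup: drop the dashes, fold to uppercase, and compare against what `nvme id-ns ... -o json | jq -r .nguid` reports for the attached block device. A sketch of that conversion (the function name mirrors the trace; the uppercasing is inferred from the values being compared there):

```shell
# Turn a canonical UUID into the 32-hex-digit NGUID form this test compares:
# strip dashes, uppercase (inferred from the echoed values in the log).
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 20c1d8af-14f0-4d19-ba7c-b2e0706e0319   # -> 20C1D8AF14F04D19BA7CB2E0706E0319
```

This is why the `[[ ... == \2\0\C\1... ]]` comparisons in the trace succeed: the target was told to create each namespace with that UUID, so the NGUID the controller reports is the same 16 bytes.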
00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=877aac41a4fc453e959680aaca3632c1 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 877AAC41A4FC453E959680AACA3632C1 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 877AAC41A4FC453E959680AACA3632C1 == \8\7\7\A\A\C\4\1\A\4\F\C\4\5\3\E\9\5\9\6\8\0\A\A\C\A\3\6\3\2\C\1 ]] 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 743207 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 743207 ']' 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 743207 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 743207 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 743207' 00:30:45.144 killing process with pid 743207 00:30:45.144 12:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 743207 00:30:45.144 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 743207 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.709 rmmod nvme_tcp 00:30:45.709 rmmod nvme_fabrics 00:30:45.709 rmmod nvme_keyring 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 743133 ']' 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 743133 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 743133 ']' 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 743133 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:45.709 12:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 743133 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 743133' 00:30:45.709 killing process with pid 743133 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 743133 00:30:45.709 12:45:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 743133 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.966 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.967 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.967 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.967 12:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.967 12:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.867 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.867 00:30:47.867 real 0m8.201s 00:30:47.867 user 0m7.944s 00:30:47.867 sys 0m2.660s 00:30:47.867 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:47.867 12:45:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 ************************************ 00:30:47.867 END TEST nvmf_nsid 00:30:47.867 ************************************ 00:30:47.867 12:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:47.867 00:30:47.867 real 17m59.028s 00:30:47.867 user 49m55.670s 00:30:47.867 sys 4m1.242s 00:30:47.867 12:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:47.867 12:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:47.867 ************************************ 00:30:47.867 END TEST nvmf_target_extra 00:30:47.867 ************************************ 00:30:48.125 12:45:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:48.125 12:45:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:48.125 12:45:17 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:48.125 12:45:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.125 ************************************ 00:30:48.125 START TEST nvmf_host 00:30:48.125 ************************************ 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:48.125 * Looking for test storage... 
00:30:48.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:48.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.125 --rc genhtml_branch_coverage=1 00:30:48.125 --rc genhtml_function_coverage=1 00:30:48.125 --rc genhtml_legend=1 00:30:48.125 --rc geninfo_all_blocks=1 00:30:48.125 --rc geninfo_unexecuted_blocks=1 00:30:48.125 00:30:48.125 ' 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:48.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.125 --rc genhtml_branch_coverage=1 00:30:48.125 --rc genhtml_function_coverage=1 00:30:48.125 --rc genhtml_legend=1 00:30:48.125 --rc 
geninfo_all_blocks=1 00:30:48.125 --rc geninfo_unexecuted_blocks=1 00:30:48.125 00:30:48.125 ' 00:30:48.125 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:48.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.125 --rc genhtml_branch_coverage=1 00:30:48.125 --rc genhtml_function_coverage=1 00:30:48.125 --rc genhtml_legend=1 00:30:48.125 --rc geninfo_all_blocks=1 00:30:48.125 --rc geninfo_unexecuted_blocks=1 00:30:48.125 00:30:48.125 ' 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:48.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.126 --rc genhtml_branch_coverage=1 00:30:48.126 --rc genhtml_function_coverage=1 00:30:48.126 --rc genhtml_legend=1 00:30:48.126 --rc geninfo_all_blocks=1 00:30:48.126 --rc geninfo_unexecuted_blocks=1 00:30:48.126 00:30:48.126 ' 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:48.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:48.126 ************************************ 00:30:48.126 START TEST nvmf_multicontroller 00:30:48.126 ************************************ 00:30:48.126 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:48.385 * Looking for test storage... 
00:30:48.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:48.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.385 --rc genhtml_branch_coverage=1 00:30:48.385 --rc genhtml_function_coverage=1 
00:30:48.385 --rc genhtml_legend=1 00:30:48.385 --rc geninfo_all_blocks=1 00:30:48.385 --rc geninfo_unexecuted_blocks=1 00:30:48.385 00:30:48.385 ' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:48.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.385 --rc genhtml_branch_coverage=1 00:30:48.385 --rc genhtml_function_coverage=1 00:30:48.385 --rc genhtml_legend=1 00:30:48.385 --rc geninfo_all_blocks=1 00:30:48.385 --rc geninfo_unexecuted_blocks=1 00:30:48.385 00:30:48.385 ' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:48.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.385 --rc genhtml_branch_coverage=1 00:30:48.385 --rc genhtml_function_coverage=1 00:30:48.385 --rc genhtml_legend=1 00:30:48.385 --rc geninfo_all_blocks=1 00:30:48.385 --rc geninfo_unexecuted_blocks=1 00:30:48.385 00:30:48.385 ' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:48.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.385 --rc genhtml_branch_coverage=1 00:30:48.385 --rc genhtml_function_coverage=1 00:30:48.385 --rc genhtml_legend=1 00:30:48.385 --rc geninfo_all_blocks=1 00:30:48.385 --rc geninfo_unexecuted_blocks=1 00:30:48.385 00:30:48.385 ' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.385 12:45:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:48.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.385 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.386 12:45:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:50.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:50.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.915 12:45:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:50.915 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:50.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:30:50.915 00:30:50.915 --- 10.0.0.2 ping statistics --- 00:30:50.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.915 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:30:50.915 00:30:50.915 --- 10.0.0.1 ping statistics --- 00:30:50.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.915 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:50.915 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=745713 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 745713 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 745713 ']' 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:50.916 12:45:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:50.916 [2024-11-05 12:45:19.860848] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:50.916 [2024-11-05 12:45:19.860928] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.916 [2024-11-05 12:45:19.933955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.916 [2024-11-05 12:45:19.985146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.916 [2024-11-05 12:45:19.985213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:50.916 [2024-11-05 12:45:19.985243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.916 [2024-11-05 12:45:19.985254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.916 [2024-11-05 12:45:19.985264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.916 [2024-11-05 12:45:19.986782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.916 [2024-11-05 12:45:19.986849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.916 [2024-11-05 12:45:19.986855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:50.916 [2024-11-05 12:45:20.145301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.916 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.173 Malloc0 00:30:51.173 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.173 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.173 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.173 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.173 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.173 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 [2024-11-05 
12:45:20.209262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 [2024-11-05 12:45:20.217058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 Malloc1 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=745737 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 745737 /var/tmp/bdevperf.sock 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 745737 ']' 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:51.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:51.174 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.432 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:51.432 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:30:51.432 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:51.432 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.432 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.689 NVMe0n1 00:30:51.689 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.689 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:51.689 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.690 1 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:51.690 12:45:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.690 request: 00:30:51.690 { 00:30:51.690 "name": "NVMe0", 00:30:51.690 "trtype": "tcp", 00:30:51.690 "traddr": "10.0.0.2", 00:30:51.690 "adrfam": "ipv4", 00:30:51.690 "trsvcid": "4420", 00:30:51.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.690 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:51.690 "hostaddr": "10.0.0.1", 00:30:51.690 "prchk_reftag": false, 00:30:51.690 "prchk_guard": false, 00:30:51.690 "hdgst": false, 00:30:51.690 "ddgst": false, 00:30:51.690 "allow_unrecognized_csi": false, 00:30:51.690 "method": "bdev_nvme_attach_controller", 00:30:51.690 "req_id": 1 00:30:51.690 } 00:30:51.690 Got JSON-RPC error response 00:30:51.690 response: 00:30:51.690 { 00:30:51.690 "code": -114, 00:30:51.690 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:51.690 } 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:51.690 12:45:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.690 request: 00:30:51.690 { 00:30:51.690 "name": "NVMe0", 00:30:51.690 "trtype": "tcp", 00:30:51.690 "traddr": "10.0.0.2", 00:30:51.690 "adrfam": "ipv4", 00:30:51.690 "trsvcid": "4420", 00:30:51.690 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:51.690 "hostaddr": "10.0.0.1", 00:30:51.690 "prchk_reftag": false, 00:30:51.690 "prchk_guard": false, 00:30:51.690 "hdgst": false, 00:30:51.690 "ddgst": false, 00:30:51.690 "allow_unrecognized_csi": false, 00:30:51.690 "method": "bdev_nvme_attach_controller", 00:30:51.690 "req_id": 1 00:30:51.690 } 00:30:51.690 Got JSON-RPC error response 00:30:51.690 response: 00:30:51.690 { 00:30:51.690 "code": -114, 00:30:51.690 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:51.690 } 00:30:51.690 12:45:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.690 request: 00:30:51.690 { 00:30:51.690 "name": "NVMe0", 00:30:51.690 "trtype": "tcp", 00:30:51.690 "traddr": "10.0.0.2", 00:30:51.690 "adrfam": "ipv4", 00:30:51.690 "trsvcid": "4420", 00:30:51.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.690 "hostaddr": "10.0.0.1", 00:30:51.690 "prchk_reftag": false, 00:30:51.690 "prchk_guard": false, 00:30:51.690 "hdgst": false, 00:30:51.690 "ddgst": false, 00:30:51.690 "multipath": "disable", 00:30:51.690 "allow_unrecognized_csi": false, 00:30:51.690 "method": "bdev_nvme_attach_controller", 00:30:51.690 "req_id": 1 00:30:51.690 } 00:30:51.690 Got JSON-RPC error response 00:30:51.690 response: 00:30:51.690 { 00:30:51.690 "code": -114, 00:30:51.690 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:51.690 } 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.690 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.690 request: 00:30:51.690 { 00:30:51.690 "name": "NVMe0", 00:30:51.690 "trtype": "tcp", 00:30:51.690 "traddr": "10.0.0.2", 00:30:51.690 "adrfam": "ipv4", 00:30:51.690 "trsvcid": "4420", 00:30:51.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.690 "hostaddr": "10.0.0.1", 00:30:51.690 "prchk_reftag": false, 00:30:51.690 "prchk_guard": false, 00:30:51.690 "hdgst": false, 00:30:51.690 "ddgst": false, 00:30:51.690 "multipath": "failover", 00:30:51.690 "allow_unrecognized_csi": false, 00:30:51.690 "method": "bdev_nvme_attach_controller", 00:30:51.690 "req_id": 1 00:30:51.690 } 00:30:51.690 Got JSON-RPC error response 00:30:51.690 response: 00:30:51.690 { 00:30:51.690 "code": -114, 00:30:51.691 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:51.691 } 00:30:51.691 12:45:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:51.691 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:30:51.691 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:51.691 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:51.691 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:51.691 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.691 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.691 12:45:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.948 NVMe0n1 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.948 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:52.205 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:52.205 12:45:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.577 { 00:30:53.577 "results": [ 00:30:53.577 { 00:30:53.577 "job": "NVMe0n1", 00:30:53.577 "core_mask": "0x1", 00:30:53.577 "workload": "write", 00:30:53.577 "status": "finished", 00:30:53.577 "queue_depth": 128, 00:30:53.577 "io_size": 4096, 00:30:53.577 "runtime": 1.008044, 00:30:53.577 "iops": 18647.995524004906, 00:30:53.577 "mibps": 72.84373251564416, 00:30:53.577 "io_failed": 0, 00:30:53.577 "io_timeout": 0, 00:30:53.577 "avg_latency_us": 6849.759718173329, 00:30:53.577 "min_latency_us": 4150.613333333334, 00:30:53.577 "max_latency_us": 15922.82074074074 00:30:53.577 } 00:30:53.577 ], 00:30:53.577 "core_count": 1 00:30:53.577 } 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 745737 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 745737 ']' 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 745737 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 745737 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 745737' 00:30:53.577 killing process with pid 745737 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 745737 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 745737 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:30:53.577 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:30:53.577 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:53.577 [2024-11-05 12:45:20.326220] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:30:53.577 [2024-11-05 12:45:20.326315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745737 ] 00:30:53.577 [2024-11-05 12:45:20.396158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.577 [2024-11-05 12:45:20.444769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.577 [2024-11-05 12:45:21.321277] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 267c7130-10da-41f9-a3e1-ecc25b0a73f8 already exists 00:30:53.577 [2024-11-05 12:45:21.321313] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:267c7130-10da-41f9-a3e1-ecc25b0a73f8 alias for bdev NVMe1n1 00:30:53.577 [2024-11-05 12:45:21.321329] bdev_nvme.c:4656:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:53.577 Running I/O for 1 seconds... 00:30:53.577 18577.00 IOPS, 72.57 MiB/s 00:30:53.577 Latency(us) 00:30:53.578 [2024-11-05T11:45:22.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.578 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:53.578 NVMe0n1 : 1.01 18648.00 72.84 0.00 0.00 6849.76 4150.61 15922.82 00:30:53.578 [2024-11-05T11:45:22.816Z] =================================================================================================================== 00:30:53.578 [2024-11-05T11:45:22.816Z] Total : 18648.00 72.84 0.00 0.00 6849.76 4150.61 15922.82 00:30:53.578 Received shutdown signal, test time was about 1.000000 seconds 00:30:53.578 00:30:53.578 Latency(us) 00:30:53.578 [2024-11-05T11:45:22.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.578 [2024-11-05T11:45:22.816Z] =================================================================================================================== 00:30:53.578 [2024-11-05T11:45:22.816Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:30:53.578 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.578 rmmod nvme_tcp 00:30:53.578 rmmod nvme_fabrics 00:30:53.578 rmmod nvme_keyring 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 745713 ']' 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 745713 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 745713 ']' 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 745713 
00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:53.578 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 745713 00:30:53.836 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:53.836 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:53.836 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 745713' 00:30:53.836 killing process with pid 745713 00:30:53.836 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 745713 00:30:53.836 12:45:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 745713 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.836 12:45:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.372 00:30:56.372 real 0m7.767s 00:30:56.372 user 0m12.581s 00:30:56.372 sys 0m2.417s 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:56.372 ************************************ 00:30:56.372 END TEST nvmf_multicontroller 00:30:56.372 ************************************ 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.372 ************************************ 00:30:56.372 START TEST nvmf_aer 00:30:56.372 ************************************ 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:56.372 * Looking for test storage... 
00:30:56.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.372 --rc genhtml_branch_coverage=1 00:30:56.372 --rc genhtml_function_coverage=1 00:30:56.372 --rc genhtml_legend=1 00:30:56.372 --rc geninfo_all_blocks=1 00:30:56.372 --rc geninfo_unexecuted_blocks=1 00:30:56.372 00:30:56.372 ' 00:30:56.372 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:56.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.372 --rc 
genhtml_branch_coverage=1 00:30:56.372 --rc genhtml_function_coverage=1 00:30:56.372 --rc genhtml_legend=1 00:30:56.372 --rc geninfo_all_blocks=1 00:30:56.373 --rc geninfo_unexecuted_blocks=1 00:30:56.373 00:30:56.373 ' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:56.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.373 --rc genhtml_branch_coverage=1 00:30:56.373 --rc genhtml_function_coverage=1 00:30:56.373 --rc genhtml_legend=1 00:30:56.373 --rc geninfo_all_blocks=1 00:30:56.373 --rc geninfo_unexecuted_blocks=1 00:30:56.373 00:30:56.373 ' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:56.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.373 --rc genhtml_branch_coverage=1 00:30:56.373 --rc genhtml_function_coverage=1 00:30:56.373 --rc genhtml_legend=1 00:30:56.373 --rc geninfo_all_blocks=1 00:30:56.373 --rc geninfo_unexecuted_blocks=1 00:30:56.373 00:30:56.373 ' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.373 12:45:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:56.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.373 12:45:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:58.275 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:58.275 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:58.275 12:45:27 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:58.275 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:58.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.275 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:58.533 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:58.533 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:58.533 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:58.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:58.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:30:58.533 00:30:58.533 --- 10.0.0.2 ping statistics --- 00:30:58.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.533 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:30:58.533 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:58.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:58.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:30:58.533 00:30:58.533 --- 10.0.0.1 ping statistics --- 00:30:58.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:58.534 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=747958 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 747958 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 747958 ']' 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:58.534 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.534 [2024-11-05 12:45:27.602621] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:30:58.534 [2024-11-05 12:45:27.602694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.534 [2024-11-05 12:45:27.677440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:58.534 [2024-11-05 12:45:27.725987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:58.534 [2024-11-05 12:45:27.726048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.534 [2024-11-05 12:45:27.726077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.534 [2024-11-05 12:45:27.726089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.534 [2024-11-05 12:45:27.726099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:58.534 [2024-11-05 12:45:27.727712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.534 [2024-11-05 12:45:27.727774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.534 [2024-11-05 12:45:27.727820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.534 [2024-11-05 12:45:27.727823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.791 [2024-11-05 12:45:27.874084] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.791 Malloc0 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.791 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.791 [2024-11-05 12:45:27.937278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.792 [ 00:30:58.792 { 00:30:58.792 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:58.792 "subtype": "Discovery", 00:30:58.792 "listen_addresses": [], 00:30:58.792 "allow_any_host": true, 00:30:58.792 "hosts": [] 00:30:58.792 }, 00:30:58.792 { 00:30:58.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.792 "subtype": "NVMe", 00:30:58.792 "listen_addresses": [ 00:30:58.792 { 00:30:58.792 "trtype": "TCP", 00:30:58.792 "adrfam": "IPv4", 00:30:58.792 "traddr": "10.0.0.2", 00:30:58.792 "trsvcid": "4420" 00:30:58.792 } 00:30:58.792 ], 00:30:58.792 "allow_any_host": true, 00:30:58.792 "hosts": [], 00:30:58.792 "serial_number": "SPDK00000000000001", 00:30:58.792 "model_number": "SPDK bdev Controller", 00:30:58.792 "max_namespaces": 2, 00:30:58.792 "min_cntlid": 1, 00:30:58.792 "max_cntlid": 65519, 00:30:58.792 "namespaces": [ 00:30:58.792 { 00:30:58.792 "nsid": 1, 00:30:58.792 "bdev_name": "Malloc0", 00:30:58.792 "name": "Malloc0", 00:30:58.792 "nguid": "87C6D0C2A34C4905AA90F6BF1ED0D8F6", 00:30:58.792 "uuid": "87c6d0c2-a34c-4905-aa90-f6bf1ed0d8f6" 00:30:58.792 } 00:30:58.792 ] 00:30:58.792 } 00:30:58.792 ] 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=748103 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:30:58.792 12:45:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.049 Malloc1 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.049 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.049 [ 00:30:59.049 { 00:30:59.049 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:59.049 "subtype": "Discovery", 00:30:59.049 "listen_addresses": [], 00:30:59.049 "allow_any_host": true, 00:30:59.049 "hosts": [] 00:30:59.049 }, 00:30:59.049 { 00:30:59.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:59.049 "subtype": "NVMe", 00:30:59.049 "listen_addresses": [ 00:30:59.049 { 00:30:59.049 "trtype": "TCP", 00:30:59.049 "adrfam": "IPv4", 00:30:59.049 "traddr": "10.0.0.2", 00:30:59.049 "trsvcid": "4420" 00:30:59.050 } 00:30:59.050 ], 00:30:59.050 "allow_any_host": true, 00:30:59.050 "hosts": [], 00:30:59.050 "serial_number": "SPDK00000000000001", 00:30:59.050 "model_number": 
"SPDK bdev Controller", 00:30:59.050 "max_namespaces": 2, 00:30:59.050 "min_cntlid": 1, 00:30:59.050 "max_cntlid": 65519, 00:30:59.050 "namespaces": [ 00:30:59.050 { 00:30:59.050 "nsid": 1, 00:30:59.050 "bdev_name": "Malloc0", 00:30:59.050 "name": "Malloc0", 00:30:59.050 "nguid": "87C6D0C2A34C4905AA90F6BF1ED0D8F6", 00:30:59.050 "uuid": "87c6d0c2-a34c-4905-aa90-f6bf1ed0d8f6" 00:30:59.050 }, 00:30:59.050 { 00:30:59.050 "nsid": 2, 00:30:59.050 "bdev_name": "Malloc1", 00:30:59.050 "name": "Malloc1", 00:30:59.050 "nguid": "55BB21B5715B42CF94047E898C9E42BA", 00:30:59.050 "uuid": "55bb21b5-715b-42cf-9404-7e898c9e42ba" 00:30:59.050 } 00:30:59.050 ] 00:30:59.050 } 00:30:59.050 ] 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 748103 00:30:59.050 Asynchronous Event Request test 00:30:59.050 Attaching to 10.0.0.2 00:30:59.050 Attached to 10.0.0.2 00:30:59.050 Registering asynchronous event callbacks... 00:30:59.050 Starting namespace attribute notice tests for all controllers... 00:30:59.050 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:59.050 aer_cb - Changed Namespace 00:30:59.050 Cleaning up... 
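The trace above shows the harness polling for the touch file the aer binary creates (`waitforfile /tmp/aer_touch_file`, with `i` incremented each 0.1 s sleep up to a limit of 200). A minimal standalone sketch of that polling pattern is below; the function body is a hypothetical reimplementation for illustration, not SPDK's actual `autotest_common.sh` source, though the 0.1 s interval and 200-iteration cap mirror the counters visible in the log:

```shell
# waitforfile: block until a path exists, polling every 0.1 s,
# giving up (non-zero exit) after 200 attempts (~20 s).
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
        sleep 0.1
        i=$((i + 1))
    done
    # Succeed only if the file actually appeared before the cap.
    [ -e "$file" ]
}

# Demo: create the file asynchronously and wait for it.
tmpfile=$(mktemp -u)
( sleep 0.3; touch "$tmpfile" ) &
waitforfile "$tmpfile" && echo "file appeared"
rm -f "$tmpfile"
```

Polling with a hard iteration cap keeps the test from hanging forever if the child process dies before signalling readiness, which is why the log shows the counter (`i=1`, `i=2`, ...) advancing between checks.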
00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.050 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:59.307 rmmod nvme_tcp 
00:30:59.307 rmmod nvme_fabrics 00:30:59.307 rmmod nvme_keyring 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 747958 ']' 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 747958 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 747958 ']' 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 747958 00:30:59.307 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:30:59.308 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:59.308 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 747958 00:30:59.308 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:59.308 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:59.308 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 747958' 00:30:59.308 killing process with pid 747958 00:30:59.308 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 747958 00:30:59.308 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 747958 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@297 -- # iptr 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.566 12:45:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.478 12:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:01.478 00:31:01.478 real 0m5.525s 00:31:01.478 user 0m4.369s 00:31:01.478 sys 0m2.056s 00:31:01.478 12:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:01.478 12:45:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:01.478 ************************************ 00:31:01.478 END TEST nvmf_aer 00:31:01.478 ************************************ 00:31:01.478 12:45:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:01.478 12:45:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:01.478 12:45:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:01.478 12:45:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:01.737 ************************************ 00:31:01.737 START TEST nvmf_async_init 00:31:01.737 
************************************ 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:01.737 * Looking for test storage... 00:31:01.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:01.737 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:31:01.737 --rc genhtml_branch_coverage=1 00:31:01.737 --rc genhtml_function_coverage=1 00:31:01.737 --rc genhtml_legend=1 00:31:01.737 --rc geninfo_all_blocks=1 00:31:01.737 --rc geninfo_unexecuted_blocks=1 00:31:01.737 00:31:01.737 ' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.737 --rc genhtml_branch_coverage=1 00:31:01.737 --rc genhtml_function_coverage=1 00:31:01.737 --rc genhtml_legend=1 00:31:01.737 --rc geninfo_all_blocks=1 00:31:01.737 --rc geninfo_unexecuted_blocks=1 00:31:01.737 00:31:01.737 ' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.737 --rc genhtml_branch_coverage=1 00:31:01.737 --rc genhtml_function_coverage=1 00:31:01.737 --rc genhtml_legend=1 00:31:01.737 --rc geninfo_all_blocks=1 00:31:01.737 --rc geninfo_unexecuted_blocks=1 00:31:01.737 00:31:01.737 ' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:01.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.737 --rc genhtml_branch_coverage=1 00:31:01.737 --rc genhtml_function_coverage=1 00:31:01.737 --rc genhtml_legend=1 00:31:01.737 --rc geninfo_all_blocks=1 00:31:01.737 --rc geninfo_unexecuted_blocks=1 00:31:01.737 00:31:01.737 ' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.737 12:45:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.737 
12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.737 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:01.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8ee8c714548c4129b0338ce9c0670cf4 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:01.738 12:45:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.332 12:45:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:04.332 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:04.332 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:04.332 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:04.332 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.332 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:04.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:31:04.333 00:31:04.333 --- 10.0.0.2 ping statistics --- 00:31:04.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.333 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:31:04.333 00:31:04.333 --- 10.0.0.1 ping statistics --- 00:31:04.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.333 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=750048 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 750048 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 750048 ']' 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 [2024-11-05 12:45:33.269935] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:31:04.333 [2024-11-05 12:45:33.270007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.333 [2024-11-05 12:45:33.341352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.333 [2024-11-05 12:45:33.387588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.333 [2024-11-05 12:45:33.387659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.333 [2024-11-05 12:45:33.387672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.333 [2024-11-05 12:45:33.387683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.333 [2024-11-05 12:45:33.387692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:04.333 [2024-11-05 12:45:33.388348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 [2024-11-05 12:45:33.525251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 null0 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8ee8c714548c4129b0338ce9c0670cf4 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.333 [2024-11-05 12:45:33.565515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.333 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.593 nvme0n1 00:31:04.593 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.593 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:04.593 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.593 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.593 [ 00:31:04.593 { 00:31:04.593 "name": "nvme0n1", 00:31:04.593 "aliases": [ 00:31:04.593 "8ee8c714-548c-4129-b033-8ce9c0670cf4" 00:31:04.593 ], 00:31:04.593 "product_name": "NVMe disk", 00:31:04.593 "block_size": 512, 00:31:04.593 "num_blocks": 2097152, 00:31:04.593 "uuid": "8ee8c714-548c-4129-b033-8ce9c0670cf4", 00:31:04.593 "numa_id": 0, 00:31:04.593 "assigned_rate_limits": { 00:31:04.593 "rw_ios_per_sec": 0, 00:31:04.593 "rw_mbytes_per_sec": 0, 00:31:04.593 "r_mbytes_per_sec": 0, 00:31:04.593 "w_mbytes_per_sec": 0 00:31:04.593 }, 00:31:04.593 "claimed": false, 00:31:04.593 "zoned": false, 00:31:04.593 "supported_io_types": { 00:31:04.593 "read": true, 00:31:04.593 "write": true, 00:31:04.593 "unmap": false, 00:31:04.593 "flush": true, 00:31:04.593 "reset": true, 00:31:04.593 "nvme_admin": true, 00:31:04.593 "nvme_io": true, 00:31:04.593 "nvme_io_md": false, 00:31:04.593 "write_zeroes": true, 00:31:04.593 "zcopy": false, 00:31:04.593 "get_zone_info": false, 00:31:04.593 "zone_management": false, 00:31:04.593 "zone_append": false, 00:31:04.593 "compare": true, 00:31:04.593 "compare_and_write": true, 00:31:04.593 "abort": true, 00:31:04.593 "seek_hole": false, 00:31:04.593 "seek_data": false, 00:31:04.593 "copy": true, 00:31:04.593 
"nvme_iov_md": false 00:31:04.593 }, 00:31:04.593 "memory_domains": [ 00:31:04.593 { 00:31:04.593 "dma_device_id": "system", 00:31:04.593 "dma_device_type": 1 00:31:04.593 } 00:31:04.593 ], 00:31:04.593 "driver_specific": { 00:31:04.593 "nvme": [ 00:31:04.593 { 00:31:04.593 "trid": { 00:31:04.593 "trtype": "TCP", 00:31:04.593 "adrfam": "IPv4", 00:31:04.593 "traddr": "10.0.0.2", 00:31:04.593 "trsvcid": "4420", 00:31:04.594 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:04.594 }, 00:31:04.594 "ctrlr_data": { 00:31:04.594 "cntlid": 1, 00:31:04.594 "vendor_id": "0x8086", 00:31:04.594 "model_number": "SPDK bdev Controller", 00:31:04.594 "serial_number": "00000000000000000000", 00:31:04.594 "firmware_revision": "25.01", 00:31:04.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.594 "oacs": { 00:31:04.594 "security": 0, 00:31:04.594 "format": 0, 00:31:04.594 "firmware": 0, 00:31:04.594 "ns_manage": 0 00:31:04.594 }, 00:31:04.594 "multi_ctrlr": true, 00:31:04.594 "ana_reporting": false 00:31:04.594 }, 00:31:04.594 "vs": { 00:31:04.594 "nvme_version": "1.3" 00:31:04.594 }, 00:31:04.594 "ns_data": { 00:31:04.594 "id": 1, 00:31:04.594 "can_share": true 00:31:04.594 } 00:31:04.594 } 00:31:04.594 ], 00:31:04.594 "mp_policy": "active_passive" 00:31:04.594 } 00:31:04.594 } 00:31:04.594 ] 00:31:04.594 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.594 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:04.594 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.594 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.594 [2024-11-05 12:45:33.814000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:04.594 [2024-11-05 12:45:33.814106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x15d44a0 (9): Bad file descriptor 00:31:04.854 [2024-11-05 12:45:33.945986] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.854 [ 00:31:04.854 { 00:31:04.854 "name": "nvme0n1", 00:31:04.854 "aliases": [ 00:31:04.854 "8ee8c714-548c-4129-b033-8ce9c0670cf4" 00:31:04.854 ], 00:31:04.854 "product_name": "NVMe disk", 00:31:04.854 "block_size": 512, 00:31:04.854 "num_blocks": 2097152, 00:31:04.854 "uuid": "8ee8c714-548c-4129-b033-8ce9c0670cf4", 00:31:04.854 "numa_id": 0, 00:31:04.854 "assigned_rate_limits": { 00:31:04.854 "rw_ios_per_sec": 0, 00:31:04.854 "rw_mbytes_per_sec": 0, 00:31:04.854 "r_mbytes_per_sec": 0, 00:31:04.854 "w_mbytes_per_sec": 0 00:31:04.854 }, 00:31:04.854 "claimed": false, 00:31:04.854 "zoned": false, 00:31:04.854 "supported_io_types": { 00:31:04.854 "read": true, 00:31:04.854 "write": true, 00:31:04.854 "unmap": false, 00:31:04.854 "flush": true, 00:31:04.854 "reset": true, 00:31:04.854 "nvme_admin": true, 00:31:04.854 "nvme_io": true, 00:31:04.854 "nvme_io_md": false, 00:31:04.854 "write_zeroes": true, 00:31:04.854 "zcopy": false, 00:31:04.854 "get_zone_info": false, 00:31:04.854 "zone_management": false, 00:31:04.854 "zone_append": false, 00:31:04.854 "compare": true, 00:31:04.854 "compare_and_write": true, 00:31:04.854 "abort": true, 00:31:04.854 "seek_hole": false, 00:31:04.854 "seek_data": false, 00:31:04.854 "copy": true, 00:31:04.854 "nvme_iov_md": false 00:31:04.854 }, 00:31:04.854 "memory_domains": [ 
00:31:04.854 { 00:31:04.854 "dma_device_id": "system", 00:31:04.854 "dma_device_type": 1 00:31:04.854 } 00:31:04.854 ], 00:31:04.854 "driver_specific": { 00:31:04.854 "nvme": [ 00:31:04.854 { 00:31:04.854 "trid": { 00:31:04.854 "trtype": "TCP", 00:31:04.854 "adrfam": "IPv4", 00:31:04.854 "traddr": "10.0.0.2", 00:31:04.854 "trsvcid": "4420", 00:31:04.854 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:04.854 }, 00:31:04.854 "ctrlr_data": { 00:31:04.854 "cntlid": 2, 00:31:04.854 "vendor_id": "0x8086", 00:31:04.854 "model_number": "SPDK bdev Controller", 00:31:04.854 "serial_number": "00000000000000000000", 00:31:04.854 "firmware_revision": "25.01", 00:31:04.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:04.854 "oacs": { 00:31:04.854 "security": 0, 00:31:04.854 "format": 0, 00:31:04.854 "firmware": 0, 00:31:04.854 "ns_manage": 0 00:31:04.854 }, 00:31:04.854 "multi_ctrlr": true, 00:31:04.854 "ana_reporting": false 00:31:04.854 }, 00:31:04.854 "vs": { 00:31:04.854 "nvme_version": "1.3" 00:31:04.854 }, 00:31:04.854 "ns_data": { 00:31:04.854 "id": 1, 00:31:04.854 "can_share": true 00:31:04.854 } 00:31:04.854 } 00:31:04.854 ], 00:31:04.854 "mp_policy": "active_passive" 00:31:04.854 } 00:31:04.854 } 00:31:04.854 ] 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.14ZbSxnHO1 
00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.14ZbSxnHO1 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.14ZbSxnHO1 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.854 12:45:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.855 [2024-11-05 12:45:33.998636] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:04.855 [2024-11-05 12:45:33.998756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.855 [2024-11-05 12:45:34.014682] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:04.855 nvme0n1 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.855 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:04.855 [ 00:31:04.855 { 00:31:04.855 "name": "nvme0n1", 00:31:04.855 "aliases": [ 00:31:04.855 "8ee8c714-548c-4129-b033-8ce9c0670cf4" 00:31:04.855 ], 00:31:04.855 "product_name": "NVMe disk", 00:31:04.855 "block_size": 512, 00:31:04.855 "num_blocks": 2097152, 00:31:04.855 "uuid": "8ee8c714-548c-4129-b033-8ce9c0670cf4", 00:31:04.855 "numa_id": 0, 00:31:04.855 "assigned_rate_limits": { 00:31:04.855 "rw_ios_per_sec": 0, 00:31:04.855 
"rw_mbytes_per_sec": 0, 00:31:04.855 "r_mbytes_per_sec": 0, 00:31:04.855 "w_mbytes_per_sec": 0 00:31:04.855 }, 00:31:04.855 "claimed": false, 00:31:04.855 "zoned": false, 00:31:04.855 "supported_io_types": { 00:31:04.855 "read": true, 00:31:04.855 "write": true, 00:31:04.855 "unmap": false, 00:31:04.855 "flush": true, 00:31:04.855 "reset": true, 00:31:04.855 "nvme_admin": true, 00:31:04.855 "nvme_io": true, 00:31:04.855 "nvme_io_md": false, 00:31:04.855 "write_zeroes": true, 00:31:04.855 "zcopy": false, 00:31:04.855 "get_zone_info": false, 00:31:04.855 "zone_management": false, 00:31:04.855 "zone_append": false, 00:31:04.855 "compare": true, 00:31:04.855 "compare_and_write": true, 00:31:04.855 "abort": true, 00:31:04.855 "seek_hole": false, 00:31:04.855 "seek_data": false, 00:31:04.855 "copy": true, 00:31:04.855 "nvme_iov_md": false 00:31:04.855 }, 00:31:04.855 "memory_domains": [ 00:31:04.855 { 00:31:04.855 "dma_device_id": "system", 00:31:05.115 "dma_device_type": 1 00:31:05.115 } 00:31:05.115 ], 00:31:05.115 "driver_specific": { 00:31:05.115 "nvme": [ 00:31:05.115 { 00:31:05.115 "trid": { 00:31:05.115 "trtype": "TCP", 00:31:05.115 "adrfam": "IPv4", 00:31:05.115 "traddr": "10.0.0.2", 00:31:05.115 "trsvcid": "4421", 00:31:05.115 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:05.115 }, 00:31:05.115 "ctrlr_data": { 00:31:05.115 "cntlid": 3, 00:31:05.115 "vendor_id": "0x8086", 00:31:05.115 "model_number": "SPDK bdev Controller", 00:31:05.115 "serial_number": "00000000000000000000", 00:31:05.115 "firmware_revision": "25.01", 00:31:05.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.115 "oacs": { 00:31:05.115 "security": 0, 00:31:05.115 "format": 0, 00:31:05.115 "firmware": 0, 00:31:05.115 "ns_manage": 0 00:31:05.115 }, 00:31:05.115 "multi_ctrlr": true, 00:31:05.115 "ana_reporting": false 00:31:05.115 }, 00:31:05.115 "vs": { 00:31:05.115 "nvme_version": "1.3" 00:31:05.115 }, 00:31:05.115 "ns_data": { 00:31:05.115 "id": 1, 00:31:05.115 "can_share": true 00:31:05.115 } 
00:31:05.115 } 00:31:05.115 ], 00:31:05.115 "mp_policy": "active_passive" 00:31:05.115 } 00:31:05.115 } 00:31:05.115 ] 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.14ZbSxnHO1 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.115 rmmod nvme_tcp 00:31:05.115 rmmod nvme_fabrics 00:31:05.115 rmmod nvme_keyring 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:05.115 12:45:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 750048 ']' 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 750048 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 750048 ']' 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 750048 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 750048 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 750048' 00:31:05.115 killing process with pid 750048 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 750048 00:31:05.115 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 750048 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:05.375 12:45:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.375 12:45:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.284 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:07.284 00:31:07.284 real 0m5.720s 00:31:07.284 user 0m2.113s 00:31:07.284 sys 0m2.037s 00:31:07.284 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:07.284 12:45:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:07.284 ************************************ 00:31:07.284 END TEST nvmf_async_init 00:31:07.284 ************************************ 00:31:07.284 12:45:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:07.285 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:07.285 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:07.285 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.285 ************************************ 00:31:07.285 START TEST dma 00:31:07.285 ************************************ 00:31:07.285 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:07.544 * 
Looking for test storage... 00:31:07.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:07.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.544 --rc genhtml_branch_coverage=1 00:31:07.544 --rc genhtml_function_coverage=1 00:31:07.544 --rc genhtml_legend=1 00:31:07.544 --rc geninfo_all_blocks=1 00:31:07.544 --rc geninfo_unexecuted_blocks=1 00:31:07.544 00:31:07.544 ' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:07.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.544 --rc genhtml_branch_coverage=1 00:31:07.544 --rc genhtml_function_coverage=1 
00:31:07.544 --rc genhtml_legend=1 00:31:07.544 --rc geninfo_all_blocks=1 00:31:07.544 --rc geninfo_unexecuted_blocks=1 00:31:07.544 00:31:07.544 ' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:07.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.544 --rc genhtml_branch_coverage=1 00:31:07.544 --rc genhtml_function_coverage=1 00:31:07.544 --rc genhtml_legend=1 00:31:07.544 --rc geninfo_all_blocks=1 00:31:07.544 --rc geninfo_unexecuted_blocks=1 00:31:07.544 00:31:07.544 ' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:07.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.544 --rc genhtml_branch_coverage=1 00:31:07.544 --rc genhtml_function_coverage=1 00:31:07.544 --rc genhtml_legend=1 00:31:07.544 --rc geninfo_all_blocks=1 00:31:07.544 --rc geninfo_unexecuted_blocks=1 00:31:07.544 00:31:07.544 ' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.544 12:45:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:07.545 
12:45:36 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:07.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:07.545 00:31:07.545 real 0m0.169s 00:31:07.545 user 0m0.127s 00:31:07.545 sys 0m0.051s 00:31:07.545 12:45:36 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:07.545 ************************************ 00:31:07.545 END TEST dma 00:31:07.545 ************************************ 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:07.545 ************************************ 00:31:07.545 START TEST nvmf_identify 00:31:07.545 ************************************ 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:07.545 * Looking for test storage... 
00:31:07.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:31:07.545 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:07.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.804 --rc genhtml_branch_coverage=1 00:31:07.804 --rc genhtml_function_coverage=1 00:31:07.804 --rc genhtml_legend=1 00:31:07.804 --rc geninfo_all_blocks=1 00:31:07.804 --rc geninfo_unexecuted_blocks=1 00:31:07.804 00:31:07.804 ' 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:31:07.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.804 --rc genhtml_branch_coverage=1 00:31:07.804 --rc genhtml_function_coverage=1 00:31:07.804 --rc genhtml_legend=1 00:31:07.804 --rc geninfo_all_blocks=1 00:31:07.804 --rc geninfo_unexecuted_blocks=1 00:31:07.804 00:31:07.804 ' 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:07.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.804 --rc genhtml_branch_coverage=1 00:31:07.804 --rc genhtml_function_coverage=1 00:31:07.804 --rc genhtml_legend=1 00:31:07.804 --rc geninfo_all_blocks=1 00:31:07.804 --rc geninfo_unexecuted_blocks=1 00:31:07.804 00:31:07.804 ' 00:31:07.804 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:07.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.804 --rc genhtml_branch_coverage=1 00:31:07.804 --rc genhtml_function_coverage=1 00:31:07.805 --rc genhtml_legend=1 00:31:07.805 --rc geninfo_all_blocks=1 00:31:07.805 --rc geninfo_unexecuted_blocks=1 00:31:07.805 00:31:07.805 ' 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:07.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.805 12:45:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.337 12:45:39 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:10.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.337 
12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:10.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:10.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:10.337 12:45:39 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:10.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:10.337 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:31:10.338 00:31:10.338 --- 10.0.0.2 ping statistics --- 00:31:10.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.338 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:10.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:31:10.338 00:31:10.338 --- 10.0.0.1 ping statistics --- 00:31:10.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.338 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=752306 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 752306 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 752306 ']' 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.338 [2024-11-05 12:45:39.290237] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:31:10.338 [2024-11-05 12:45:39.290313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.338 [2024-11-05 12:45:39.365495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.338 [2024-11-05 12:45:39.413272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.338 [2024-11-05 12:45:39.413327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.338 [2024-11-05 12:45:39.413355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.338 [2024-11-05 12:45:39.413371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.338 [2024-11-05 12:45:39.413380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:10.338 [2024-11-05 12:45:39.414974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.338 [2024-11-05 12:45:39.415037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.338 [2024-11-05 12:45:39.415089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.338 [2024-11-05 12:45:39.415092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.338 [2024-11-05 12:45:39.542349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.338 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.615 Malloc0 00:31:10.615 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.615 12:45:39 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:10.615 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 [2024-11-05 12:45:39.624740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 12:45:39 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.616 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.616 [ 00:31:10.616 { 00:31:10.616 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:10.616 "subtype": "Discovery", 00:31:10.616 "listen_addresses": [ 00:31:10.616 { 00:31:10.616 "trtype": "TCP", 00:31:10.616 "adrfam": "IPv4", 00:31:10.616 "traddr": "10.0.0.2", 00:31:10.616 "trsvcid": "4420" 00:31:10.616 } 00:31:10.616 ], 00:31:10.617 "allow_any_host": true, 00:31:10.617 "hosts": [] 00:31:10.617 }, 00:31:10.617 { 00:31:10.617 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.617 "subtype": "NVMe", 00:31:10.617 "listen_addresses": [ 00:31:10.617 { 00:31:10.617 "trtype": "TCP", 00:31:10.617 "adrfam": "IPv4", 00:31:10.617 "traddr": "10.0.0.2", 00:31:10.617 "trsvcid": "4420" 00:31:10.617 } 00:31:10.617 ], 00:31:10.617 "allow_any_host": true, 00:31:10.617 "hosts": [], 00:31:10.617 "serial_number": "SPDK00000000000001", 00:31:10.617 "model_number": "SPDK bdev Controller", 00:31:10.617 "max_namespaces": 32, 00:31:10.617 "min_cntlid": 1, 00:31:10.617 "max_cntlid": 65519, 00:31:10.617 "namespaces": [ 00:31:10.617 { 00:31:10.617 "nsid": 1, 00:31:10.617 "bdev_name": "Malloc0", 00:31:10.617 "name": "Malloc0", 00:31:10.617 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:10.617 "eui64": "ABCDEF0123456789", 00:31:10.617 "uuid": "e3317978-db1c-4fd8-a8ae-b987962c1bd1" 00:31:10.617 } 00:31:10.617 ] 00:31:10.617 } 00:31:10.617 ] 00:31:10.617 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.617 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:10.617 [2024-11-05 12:45:39.662749] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:31:10.617 [2024-11-05 12:45:39.662788] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752331 ] 00:31:10.617 [2024-11-05 12:45:39.715037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:10.617 [2024-11-05 12:45:39.715106] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:10.618 [2024-11-05 12:45:39.715118] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:10.618 [2024-11-05 12:45:39.715134] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:10.618 [2024-11-05 12:45:39.715162] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:10.618 [2024-11-05 12:45:39.715948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:10.618 [2024-11-05 12:45:39.716002] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1653d80 0 00:31:10.618 [2024-11-05 12:45:39.721875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:10.618 [2024-11-05 12:45:39.721897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:10.618 [2024-11-05 12:45:39.721907] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:10.618 [2024-11-05 12:45:39.721913] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:10.618 [2024-11-05 12:45:39.721954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.618 [2024-11-05 12:45:39.721968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.618 [2024-11-05 12:45:39.721975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.618 [2024-11-05 12:45:39.721993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:10.618 [2024-11-05 12:45:39.722020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.618 [2024-11-05 12:45:39.729876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.618 [2024-11-05 12:45:39.729894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.618 [2024-11-05 12:45:39.729902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.618 [2024-11-05 12:45:39.729910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.618 [2024-11-05 12:45:39.729929] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:10.618 [2024-11-05 12:45:39.729941] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:10.618 [2024-11-05 12:45:39.729956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:10.618 [2024-11-05 12:45:39.729977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.618 [2024-11-05 12:45:39.729986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.618 [2024-11-05 12:45:39.729992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 
00:31:10.619 [2024-11-05 12:45:39.730003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.619 [2024-11-05 12:45:39.730027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.619 [2024-11-05 12:45:39.730166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.619 [2024-11-05 12:45:39.730180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.619 [2024-11-05 12:45:39.730188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.619 [2024-11-05 12:45:39.730194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.619 [2024-11-05 12:45:39.730203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:10.619 [2024-11-05 12:45:39.730216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:10.619 [2024-11-05 12:45:39.730229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.619 [2024-11-05 12:45:39.730236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.619 [2024-11-05 12:45:39.730243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.619 [2024-11-05 12:45:39.730253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.619 [2024-11-05 12:45:39.730274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.619 [2024-11-05 12:45:39.730356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.619 [2024-11-05 12:45:39.730371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:31:10.619 [2024-11-05 12:45:39.730378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.619 [2024-11-05 12:45:39.730384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.619 [2024-11-05 12:45:39.730393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:10.619 [2024-11-05 12:45:39.730407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:10.619 [2024-11-05 12:45:39.730419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.619 [2024-11-05 12:45:39.730427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.619 [2024-11-05 12:45:39.730433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.619 [2024-11-05 12:45:39.730443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.619 [2024-11-05 12:45:39.730464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.619 [2024-11-05 12:45:39.730559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.619 [2024-11-05 12:45:39.730571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.619 [2024-11-05 12:45:39.730578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.730585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.620 [2024-11-05 12:45:39.730594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:10.620 [2024-11-05 12:45:39.730617] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.730627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.730634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.730644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.620 [2024-11-05 12:45:39.730665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.620 [2024-11-05 12:45:39.730789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.620 [2024-11-05 12:45:39.730802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.620 [2024-11-05 12:45:39.730809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.730816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.620 [2024-11-05 12:45:39.730824] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:10.620 [2024-11-05 12:45:39.730832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:10.620 [2024-11-05 12:45:39.730845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:10.620 [2024-11-05 12:45:39.730955] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:10.620 [2024-11-05 12:45:39.730966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:31:10.620 [2024-11-05 12:45:39.730980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.730988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.730994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.731004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.620 [2024-11-05 12:45:39.731026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.620 [2024-11-05 12:45:39.731163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.620 [2024-11-05 12:45:39.731177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.620 [2024-11-05 12:45:39.731184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.620 [2024-11-05 12:45:39.731199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:10.620 [2024-11-05 12:45:39.731216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.731241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.620 [2024-11-05 12:45:39.731262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.620 [2024-11-05 
12:45:39.731337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.620 [2024-11-05 12:45:39.731351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.620 [2024-11-05 12:45:39.731358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.620 [2024-11-05 12:45:39.731377] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:10.620 [2024-11-05 12:45:39.731387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:10.620 [2024-11-05 12:45:39.731400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:10.620 [2024-11-05 12:45:39.731415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:10.620 [2024-11-05 12:45:39.731430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.731448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.620 [2024-11-05 12:45:39.731469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.620 [2024-11-05 12:45:39.731604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.620 [2024-11-05 12:45:39.731617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:31:10.620 [2024-11-05 12:45:39.731624] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731631] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653d80): datao=0, datal=4096, cccid=0 00:31:10.620 [2024-11-05 12:45:39.731639] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16bf480) on tqpair(0x1653d80): expected_datao=0, payload_size=4096 00:31:10.620 [2024-11-05 12:45:39.731646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731657] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731666] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.620 [2024-11-05 12:45:39.731688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.620 [2024-11-05 12:45:39.731695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.620 [2024-11-05 12:45:39.731713] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:10.620 [2024-11-05 12:45:39.731722] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:10.620 [2024-11-05 12:45:39.731729] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:10.620 [2024-11-05 12:45:39.731738] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:10.620 [2024-11-05 12:45:39.731750] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:10.620 [2024-11-05 12:45:39.731759] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:10.620 [2024-11-05 12:45:39.731774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:10.620 [2024-11-05 12:45:39.731786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.731810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:10.620 [2024-11-05 12:45:39.731836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.620 [2024-11-05 12:45:39.731961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.620 [2024-11-05 12:45:39.731975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.620 [2024-11-05 12:45:39.731982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.731989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.620 [2024-11-05 12:45:39.732005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.732030] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.620 [2024-11-05 12:45:39.732040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.732062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.620 [2024-11-05 12:45:39.732071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.732092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.620 [2024-11-05 12:45:39.732102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.732123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.620 [2024-11-05 12:45:39.732131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:10.620 [2024-11-05 12:45:39.732166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:10.620 [2024-11-05 12:45:39.732178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.620 [2024-11-05 12:45:39.732185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653d80) 00:31:10.620 [2024-11-05 12:45:39.732195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.620 [2024-11-05 12:45:39.732217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf480, cid 0, qid 0 00:31:10.620 [2024-11-05 12:45:39.732244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf600, cid 1, qid 0 00:31:10.620 [2024-11-05 12:45:39.732252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf780, cid 2, qid 0 00:31:10.621 [2024-11-05 12:45:39.732260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.621 [2024-11-05 12:45:39.732268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bfa80, cid 4, qid 0 00:31:10.621 [2024-11-05 12:45:39.732444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.621 [2024-11-05 12:45:39.732458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.621 [2024-11-05 12:45:39.732465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.732476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bfa80) on tqpair=0x1653d80 00:31:10.621 [2024-11-05 12:45:39.732489] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:10.621 [2024-11-05 12:45:39.732499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:31:10.621 [2024-11-05 12:45:39.732517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.732527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653d80) 00:31:10.621 [2024-11-05 12:45:39.732552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.621 [2024-11-05 12:45:39.732573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bfa80, cid 4, qid 0 00:31:10.621 [2024-11-05 12:45:39.732729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.621 [2024-11-05 12:45:39.732744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.621 [2024-11-05 12:45:39.732751] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.732758] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653d80): datao=0, datal=4096, cccid=4 00:31:10.621 [2024-11-05 12:45:39.732766] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16bfa80) on tqpair(0x1653d80): expected_datao=0, payload_size=4096 00:31:10.621 [2024-11-05 12:45:39.732773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.732790] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.732799] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.773963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.621 [2024-11-05 12:45:39.773983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.621 [2024-11-05 12:45:39.773991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.773998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x16bfa80) on tqpair=0x1653d80 00:31:10.621 [2024-11-05 12:45:39.774017] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:10.621 [2024-11-05 12:45:39.774058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653d80) 00:31:10.621 [2024-11-05 12:45:39.774081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.621 [2024-11-05 12:45:39.774093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1653d80) 00:31:10.621 [2024-11-05 12:45:39.774116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.621 [2024-11-05 12:45:39.774144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bfa80, cid 4, qid 0 00:31:10.621 [2024-11-05 12:45:39.774156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bfc00, cid 5, qid 0 00:31:10.621 [2024-11-05 12:45:39.774338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.621 [2024-11-05 12:45:39.774351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.621 [2024-11-05 12:45:39.774359] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774365] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653d80): datao=0, datal=1024, cccid=4 00:31:10.621 [2024-11-05 12:45:39.774373] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16bfa80) on tqpair(0x1653d80): expected_datao=0, payload_size=1024 00:31:10.621 [2024-11-05 12:45:39.774384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774395] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774403] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.621 [2024-11-05 12:45:39.774436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.621 [2024-11-05 12:45:39.774443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.774449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bfc00) on tqpair=0x1653d80 00:31:10.621 [2024-11-05 12:45:39.816875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.621 [2024-11-05 12:45:39.816893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.621 [2024-11-05 12:45:39.816900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.816907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bfa80) on tqpair=0x1653d80 00:31:10.621 [2024-11-05 12:45:39.816925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.816934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653d80) 00:31:10.621 [2024-11-05 12:45:39.816945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.621 [2024-11-05 12:45:39.816989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bfa80, cid 4, qid 0 00:31:10.621 [2024-11-05 12:45:39.817132] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.621 [2024-11-05 12:45:39.817148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.621 [2024-11-05 12:45:39.817155] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817161] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653d80): datao=0, datal=3072, cccid=4 00:31:10.621 [2024-11-05 12:45:39.817169] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16bfa80) on tqpair(0x1653d80): expected_datao=0, payload_size=3072 00:31:10.621 [2024-11-05 12:45:39.817177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817187] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817195] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.621 [2024-11-05 12:45:39.817217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.621 [2024-11-05 12:45:39.817223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bfa80) on tqpair=0x1653d80 00:31:10.621 [2024-11-05 12:45:39.817245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1653d80) 00:31:10.621 [2024-11-05 12:45:39.817263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.621 [2024-11-05 12:45:39.817292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bfa80, cid 4, qid 0 00:31:10.621 [2024-11-05 
12:45:39.817394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.621 [2024-11-05 12:45:39.817409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.621 [2024-11-05 12:45:39.817416] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817422] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1653d80): datao=0, datal=8, cccid=4 00:31:10.621 [2024-11-05 12:45:39.817430] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16bfa80) on tqpair(0x1653d80): expected_datao=0, payload_size=8 00:31:10.621 [2024-11-05 12:45:39.817437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817452] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.621 [2024-11-05 12:45:39.817460] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.885 [2024-11-05 12:45:39.858006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.885 [2024-11-05 12:45:39.858025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.885 [2024-11-05 12:45:39.858033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.885 [2024-11-05 12:45:39.858040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bfa80) on tqpair=0x1653d80 00:31:10.885 ===================================================== 00:31:10.885 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:10.885 ===================================================== 00:31:10.885 Controller Capabilities/Features 00:31:10.885 ================================ 00:31:10.885 Vendor ID: 0000 00:31:10.885 Subsystem Vendor ID: 0000 00:31:10.885 Serial Number: .................... 00:31:10.885 Model Number: ........................................ 
00:31:10.885 Firmware Version: 25.01 00:31:10.885 Recommended Arb Burst: 0 00:31:10.885 IEEE OUI Identifier: 00 00 00 00:31:10.885 Multi-path I/O 00:31:10.885 May have multiple subsystem ports: No 00:31:10.885 May have multiple controllers: No 00:31:10.885 Associated with SR-IOV VF: No 00:31:10.885 Max Data Transfer Size: 131072 00:31:10.885 Max Number of Namespaces: 0 00:31:10.885 Max Number of I/O Queues: 1024 00:31:10.885 NVMe Specification Version (VS): 1.3 00:31:10.885 NVMe Specification Version (Identify): 1.3 00:31:10.885 Maximum Queue Entries: 128 00:31:10.885 Contiguous Queues Required: Yes 00:31:10.885 Arbitration Mechanisms Supported 00:31:10.885 Weighted Round Robin: Not Supported 00:31:10.885 Vendor Specific: Not Supported 00:31:10.885 Reset Timeout: 15000 ms 00:31:10.885 Doorbell Stride: 4 bytes 00:31:10.885 NVM Subsystem Reset: Not Supported 00:31:10.885 Command Sets Supported 00:31:10.885 NVM Command Set: Supported 00:31:10.885 Boot Partition: Not Supported 00:31:10.885 Memory Page Size Minimum: 4096 bytes 00:31:10.885 Memory Page Size Maximum: 4096 bytes 00:31:10.885 Persistent Memory Region: Not Supported 00:31:10.885 Optional Asynchronous Events Supported 00:31:10.885 Namespace Attribute Notices: Not Supported 00:31:10.885 Firmware Activation Notices: Not Supported 00:31:10.885 ANA Change Notices: Not Supported 00:31:10.885 PLE Aggregate Log Change Notices: Not Supported 00:31:10.885 LBA Status Info Alert Notices: Not Supported 00:31:10.885 EGE Aggregate Log Change Notices: Not Supported 00:31:10.885 Normal NVM Subsystem Shutdown event: Not Supported 00:31:10.885 Zone Descriptor Change Notices: Not Supported 00:31:10.885 Discovery Log Change Notices: Supported 00:31:10.885 Controller Attributes 00:31:10.885 128-bit Host Identifier: Not Supported 00:31:10.885 Non-Operational Permissive Mode: Not Supported 00:31:10.885 NVM Sets: Not Supported 00:31:10.885 Read Recovery Levels: Not Supported 00:31:10.885 Endurance Groups: Not Supported 00:31:10.885 
Predictable Latency Mode: Not Supported 00:31:10.885 Traffic Based Keep Alive: Not Supported 00:31:10.885 Namespace Granularity: Not Supported 00:31:10.885 SQ Associations: Not Supported 00:31:10.885 UUID List: Not Supported 00:31:10.885 Multi-Domain Subsystem: Not Supported 00:31:10.885 Fixed Capacity Management: Not Supported 00:31:10.885 Variable Capacity Management: Not Supported 00:31:10.885 Delete Endurance Group: Not Supported 00:31:10.885 Delete NVM Set: Not Supported 00:31:10.885 Extended LBA Formats Supported: Not Supported 00:31:10.885 Flexible Data Placement Supported: Not Supported 00:31:10.885 00:31:10.885 Controller Memory Buffer Support 00:31:10.885 ================================ 00:31:10.885 Supported: No 00:31:10.885 00:31:10.885 Persistent Memory Region Support 00:31:10.885 ================================ 00:31:10.885 Supported: No 00:31:10.885 00:31:10.885 Admin Command Set Attributes 00:31:10.885 ============================ 00:31:10.885 Security Send/Receive: Not Supported 00:31:10.885 Format NVM: Not Supported 00:31:10.885 Firmware Activate/Download: Not Supported 00:31:10.885 Namespace Management: Not Supported 00:31:10.885 Device Self-Test: Not Supported 00:31:10.885 Directives: Not Supported 00:31:10.885 NVMe-MI: Not Supported 00:31:10.885 Virtualization Management: Not Supported 00:31:10.885 Doorbell Buffer Config: Not Supported 00:31:10.885 Get LBA Status Capability: Not Supported 00:31:10.885 Command & Feature Lockdown Capability: Not Supported 00:31:10.885 Abort Command Limit: 1 00:31:10.885 Async Event Request Limit: 4 00:31:10.885 Number of Firmware Slots: N/A 00:31:10.885 Firmware Slot 1 Read-Only: N/A 00:31:10.885 Firmware Activation Without Reset: N/A 00:31:10.885 Multiple Update Detection Support: N/A 00:31:10.885 Firmware Update Granularity: No Information Provided 00:31:10.885 Per-Namespace SMART Log: No 00:31:10.885 Asymmetric Namespace Access Log Page: Not Supported 00:31:10.885 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:31:10.885 Command Effects Log Page: Not Supported 00:31:10.885 Get Log Page Extended Data: Supported 00:31:10.885 Telemetry Log Pages: Not Supported 00:31:10.885 Persistent Event Log Pages: Not Supported 00:31:10.885 Supported Log Pages Log Page: May Support 00:31:10.885 Commands Supported & Effects Log Page: Not Supported 00:31:10.885 Feature Identifiers & Effects Log Page: May Support 00:31:10.885 NVMe-MI Commands & Effects Log Page: May Support 00:31:10.885 Data Area 4 for Telemetry Log: Not Supported 00:31:10.885 Error Log Page Entries Supported: 128 00:31:10.885 Keep Alive: Not Supported 00:31:10.885 00:31:10.885 NVM Command Set Attributes 00:31:10.885 ========================== 00:31:10.885 Submission Queue Entry Size 00:31:10.885 Max: 1 00:31:10.885 Min: 1 00:31:10.885 Completion Queue Entry Size 00:31:10.885 Max: 1 00:31:10.885 Min: 1 00:31:10.885 Number of Namespaces: 0 00:31:10.885 Compare Command: Not Supported 00:31:10.885 Write Uncorrectable Command: Not Supported 00:31:10.885 Dataset Management Command: Not Supported 00:31:10.885 Write Zeroes Command: Not Supported 00:31:10.885 Set Features Save Field: Not Supported 00:31:10.885 Reservations: Not Supported 00:31:10.885 Timestamp: Not Supported 00:31:10.885 Copy: Not Supported 00:31:10.885 Volatile Write Cache: Not Present 00:31:10.885 Atomic Write Unit (Normal): 1 00:31:10.885 Atomic Write Unit (PFail): 1 00:31:10.885 Atomic Compare & Write Unit: 1 00:31:10.885 Fused Compare & Write: Supported 00:31:10.885 Scatter-Gather List 00:31:10.885 SGL Command Set: Supported 00:31:10.885 SGL Keyed: Supported 00:31:10.885 SGL Bit Bucket Descriptor: Not Supported 00:31:10.885 SGL Metadata Pointer: Not Supported 00:31:10.885 Oversized SGL: Not Supported 00:31:10.885 SGL Metadata Address: Not Supported 00:31:10.885 SGL Offset: Supported 00:31:10.885 Transport SGL Data Block: Not Supported 00:31:10.885 Replay Protected Memory Block: Not Supported 00:31:10.885 00:31:10.885 
Firmware Slot Information 00:31:10.885 ========================= 00:31:10.885 Active slot: 0 00:31:10.885 00:31:10.885 00:31:10.885 Error Log 00:31:10.885 ========= 00:31:10.885 00:31:10.885 Active Namespaces 00:31:10.885 ================= 00:31:10.885 Discovery Log Page 00:31:10.885 ================== 00:31:10.885 Generation Counter: 2 00:31:10.885 Number of Records: 2 00:31:10.885 Record Format: 0 00:31:10.885 00:31:10.885 Discovery Log Entry 0 00:31:10.885 ---------------------- 00:31:10.885 Transport Type: 3 (TCP) 00:31:10.885 Address Family: 1 (IPv4) 00:31:10.885 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:10.885 Entry Flags: 00:31:10.885 Duplicate Returned Information: 1 00:31:10.885 Explicit Persistent Connection Support for Discovery: 1 00:31:10.885 Transport Requirements: 00:31:10.885 Secure Channel: Not Required 00:31:10.885 Port ID: 0 (0x0000) 00:31:10.885 Controller ID: 65535 (0xffff) 00:31:10.885 Admin Max SQ Size: 128 00:31:10.885 Transport Service Identifier: 4420 00:31:10.885 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:10.885 Transport Address: 10.0.0.2 00:31:10.885 Discovery Log Entry 1 00:31:10.885 ---------------------- 00:31:10.885 Transport Type: 3 (TCP) 00:31:10.885 Address Family: 1 (IPv4) 00:31:10.885 Subsystem Type: 2 (NVM Subsystem) 00:31:10.885 Entry Flags: 00:31:10.885 Duplicate Returned Information: 0 00:31:10.885 Explicit Persistent Connection Support for Discovery: 0 00:31:10.885 Transport Requirements: 00:31:10.886 Secure Channel: Not Required 00:31:10.886 Port ID: 0 (0x0000) 00:31:10.886 Controller ID: 65535 (0xffff) 00:31:10.886 Admin Max SQ Size: 128 00:31:10.886 Transport Service Identifier: 4420 00:31:10.886 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:10.886 Transport Address: 10.0.0.2 [2024-11-05 12:45:39.858164] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:10.886 [2024-11-05 
12:45:39.858186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf480) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.858198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.886 [2024-11-05 12:45:39.858208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf600) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.858215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.886 [2024-11-05 12:45:39.858223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf780) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.858231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.886 [2024-11-05 12:45:39.858239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.858246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.886 [2024-11-05 12:45:39.858260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.858300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.858325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.858458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 
12:45:39.858471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.858478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.858502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.858529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.858556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.858648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.858663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.858670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.858686] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:10.886 [2024-11-05 12:45:39.858694] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:10.886 [2024-11-05 12:45:39.858714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 
[2024-11-05 12:45:39.858731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.858741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.858762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.858840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.858854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.858869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.858895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.858911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.858921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.858943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.859043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.859058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.859065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on 
tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.859089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.859115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.859136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.859242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.859256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.859264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.859287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.859314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.859334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.859412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.859426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:31:10.886 [2024-11-05 12:45:39.859433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.859461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.859489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.859510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.859610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.859622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.859629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.859652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.859678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.859699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.859773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.859786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.859793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.859816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.859842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.859872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.859948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.859961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.859968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.859975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.886 [2024-11-05 12:45:39.859991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.860000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.886 [2024-11-05 12:45:39.860007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.886 [2024-11-05 12:45:39.860017] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.886 [2024-11-05 12:45:39.860038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.886 [2024-11-05 12:45:39.860112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.886 [2024-11-05 12:45:39.860126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.886 [2024-11-05 12:45:39.860133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.887 [2024-11-05 12:45:39.860156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.887 [2024-11-05 12:45:39.860187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.860208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.887 [2024-11-05 12:45:39.860282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.860294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.860301] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.887 [2024-11-05 12:45:39.860324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860333] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.887 [2024-11-05 12:45:39.860350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.860371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.887 [2024-11-05 12:45:39.860443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.860456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.860463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.887 [2024-11-05 12:45:39.860486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.887 [2024-11-05 12:45:39.860512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.860533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.887 [2024-11-05 12:45:39.860607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.860619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.860626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860633] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.887 [2024-11-05 12:45:39.860648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.887 [2024-11-05 12:45:39.860675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.860695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.887 [2024-11-05 12:45:39.860769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.860781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.860788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.887 [2024-11-05 12:45:39.860811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.860827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.887 [2024-11-05 12:45:39.860841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.864867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.887 [2024-11-05 12:45:39.864889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 
12:45:39.864901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.864908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.864915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.887 [2024-11-05 12:45:39.864933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.864943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.864949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1653d80) 00:31:10.887 [2024-11-05 12:45:39.864960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.864982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16bf900, cid 3, qid 0 00:31:10.887 [2024-11-05 12:45:39.865110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.865125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.865132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.865139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16bf900) on tqpair=0x1653d80 00:31:10.887 [2024-11-05 12:45:39.865153] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:31:10.887 00:31:10.887 12:45:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:10.887 [2024-11-05 12:45:39.901585] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 
initialization... 00:31:10.887 [2024-11-05 12:45:39.901631] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752340 ] 00:31:10.887 [2024-11-05 12:45:39.952526] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:10.887 [2024-11-05 12:45:39.952580] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:10.887 [2024-11-05 12:45:39.952590] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:10.887 [2024-11-05 12:45:39.952604] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:10.887 [2024-11-05 12:45:39.952616] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:10.887 [2024-11-05 12:45:39.956136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:10.887 [2024-11-05 12:45:39.956191] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x993d80 0 00:31:10.887 [2024-11-05 12:45:39.963114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:10.887 [2024-11-05 12:45:39.963135] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:10.887 [2024-11-05 12:45:39.963143] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:10.887 [2024-11-05 12:45:39.963149] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:10.887 [2024-11-05 12:45:39.963199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.963212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.963218] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.887 [2024-11-05 12:45:39.963232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:10.887 [2024-11-05 12:45:39.963257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.887 [2024-11-05 12:45:39.969882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.969901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.969908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.969915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.887 [2024-11-05 12:45:39.969935] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:10.887 [2024-11-05 12:45:39.969946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:10.887 [2024-11-05 12:45:39.969955] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:10.887 [2024-11-05 12:45:39.969973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.969984] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.969991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.887 [2024-11-05 12:45:39.970002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.970025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.887 [2024-11-05 12:45:39.970137] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.970165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.970172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.970182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.887 [2024-11-05 12:45:39.970190] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:10.887 [2024-11-05 12:45:39.970203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:10.887 [2024-11-05 12:45:39.970215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.970222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.970231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.887 [2024-11-05 12:45:39.970241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.887 [2024-11-05 12:45:39.970262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.887 [2024-11-05 12:45:39.970351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.887 [2024-11-05 12:45:39.970365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.887 [2024-11-05 12:45:39.970372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.887 [2024-11-05 12:45:39.970378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.888 [2024-11-05 12:45:39.970386] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:31:10.888 [2024-11-05 12:45:39.970401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:10.888 [2024-11-05 12:45:39.970414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:39.970441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.888 [2024-11-05 12:45:39.970462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.888 [2024-11-05 12:45:39.970548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.888 [2024-11-05 12:45:39.970562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.888 [2024-11-05 12:45:39.970569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.888 [2024-11-05 12:45:39.970583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:10.888 [2024-11-05 12:45:39.970601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:39.970626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.888 [2024-11-05 12:45:39.970646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.888 [2024-11-05 12:45:39.970731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.888 [2024-11-05 12:45:39.970745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.888 [2024-11-05 12:45:39.970752] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.888 [2024-11-05 12:45:39.970766] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:10.888 [2024-11-05 12:45:39.970776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:10.888 [2024-11-05 12:45:39.970789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:10.888 [2024-11-05 12:45:39.970900] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:10.888 [2024-11-05 12:45:39.970911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:10.888 [2024-11-05 12:45:39.970939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.970952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:39.970962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.888 [2024-11-05 12:45:39.970983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.888 [2024-11-05 12:45:39.971084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.888 [2024-11-05 12:45:39.971099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.888 [2024-11-05 12:45:39.971106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.888 [2024-11-05 12:45:39.971121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:10.888 [2024-11-05 12:45:39.971159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:39.971186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.888 [2024-11-05 12:45:39.971206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.888 [2024-11-05 12:45:39.971293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.888 [2024-11-05 12:45:39.971307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.888 [2024-11-05 12:45:39.971314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.888 [2024-11-05 12:45:39.971327] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:10.888 [2024-11-05 12:45:39.971335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:10.888 [2024-11-05 12:45:39.971350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:10.888 [2024-11-05 12:45:39.971366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:10.888 [2024-11-05 12:45:39.971379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:39.971397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.888 [2024-11-05 12:45:39.971417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.888 [2024-11-05 12:45:39.971534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.888 [2024-11-05 12:45:39.971549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.888 [2024-11-05 12:45:39.971556] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971562] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=4096, cccid=0 00:31:10.888 [2024-11-05 12:45:39.971569] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ff480) on tqpair(0x993d80): expected_datao=0, payload_size=4096 00:31:10.888 [2024-11-05 12:45:39.971584] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971603] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:39.971613] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.012937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.888 [2024-11-05 12:45:40.012958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.888 [2024-11-05 12:45:40.012966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.012973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.888 [2024-11-05 12:45:40.012984] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:10.888 [2024-11-05 12:45:40.012993] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:10.888 [2024-11-05 12:45:40.013001] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:10.888 [2024-11-05 12:45:40.013008] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:10.888 [2024-11-05 12:45:40.013021] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:10.888 [2024-11-05 12:45:40.013034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:10.888 [2024-11-05 12:45:40.013052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:10.888 [2024-11-05 12:45:40.013065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013073] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:40.013091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:10.888 [2024-11-05 12:45:40.013114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff480, cid 0, qid 0 00:31:10.888 [2024-11-05 12:45:40.013202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.888 [2024-11-05 12:45:40.013217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.888 [2024-11-05 12:45:40.013224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.888 [2024-11-05 12:45:40.013248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:40.013275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.888 [2024-11-05 12:45:40.013284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:40.013306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:31:10.888 [2024-11-05 12:45:40.013315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:40.013337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.888 [2024-11-05 12:45:40.013346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.888 [2024-11-05 12:45:40.013359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.888 [2024-11-05 12:45:40.013367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.888 [2024-11-05 12:45:40.013376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.013409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.013421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.013428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x993d80) 00:31:10.889 [2024-11-05 12:45:40.013438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.889 [2024-11-05 12:45:40.013475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x9ff480, cid 0, qid 0 00:31:10.889 [2024-11-05 12:45:40.013491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff600, cid 1, qid 0 00:31:10.889 [2024-11-05 12:45:40.013500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff780, cid 2, qid 0 00:31:10.889 [2024-11-05 12:45:40.013507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.889 [2024-11-05 12:45:40.013515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffa80, cid 4, qid 0 00:31:10.889 [2024-11-05 12:45:40.013631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.889 [2024-11-05 12:45:40.013655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.889 [2024-11-05 12:45:40.013662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.013669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffa80) on tqpair=0x993d80 00:31:10.889 [2024-11-05 12:45:40.013681] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:10.889 [2024-11-05 12:45:40.013691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.013718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.013730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.013740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.013748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.889 [2024-11-05 
12:45:40.013754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x993d80) 00:31:10.889 [2024-11-05 12:45:40.013764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:10.889 [2024-11-05 12:45:40.013786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffa80, cid 4, qid 0 00:31:10.889 [2024-11-05 12:45:40.013910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.889 [2024-11-05 12:45:40.013925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.889 [2024-11-05 12:45:40.013932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.013939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffa80) on tqpair=0x993d80 00:31:10.889 [2024-11-05 12:45:40.014013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.014036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.014051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x993d80) 00:31:10.889 [2024-11-05 12:45:40.014069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.889 [2024-11-05 12:45:40.014091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffa80, cid 4, qid 0 00:31:10.889 [2024-11-05 12:45:40.014244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.889 [2024-11-05 12:45:40.014260] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.889 [2024-11-05 12:45:40.014270] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014277] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=4096, cccid=4 00:31:10.889 [2024-11-05 12:45:40.014285] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ffa80) on tqpair(0x993d80): expected_datao=0, payload_size=4096 00:31:10.889 [2024-11-05 12:45:40.014305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014317] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014325] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.889 [2024-11-05 12:45:40.014348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.889 [2024-11-05 12:45:40.014355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffa80) on tqpair=0x993d80 00:31:10.889 [2024-11-05 12:45:40.014387] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:10.889 [2024-11-05 12:45:40.014404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.014426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.014440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x993d80) 00:31:10.889 [2024-11-05 12:45:40.014458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.889 [2024-11-05 12:45:40.014480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffa80, cid 4, qid 0 00:31:10.889 [2024-11-05 12:45:40.014620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.889 [2024-11-05 12:45:40.014635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.889 [2024-11-05 12:45:40.014643] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014649] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=4096, cccid=4 00:31:10.889 [2024-11-05 12:45:40.014660] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ffa80) on tqpair(0x993d80): expected_datao=0, payload_size=4096 00:31:10.889 [2024-11-05 12:45:40.014675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014687] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014695] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.889 [2024-11-05 12:45:40.014726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.889 [2024-11-05 12:45:40.014733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffa80) on tqpair=0x993d80 00:31:10.889 [2024-11-05 12:45:40.014761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:10.889 [2024-11-05 
12:45:40.014793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.014808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.014816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x993d80) 00:31:10.889 [2024-11-05 12:45:40.014827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.889 [2024-11-05 12:45:40.014849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffa80, cid 4, qid 0 00:31:10.889 [2024-11-05 12:45:40.015007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.889 [2024-11-05 12:45:40.015023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.889 [2024-11-05 12:45:40.015033] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.015044] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=4096, cccid=4 00:31:10.889 [2024-11-05 12:45:40.015060] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ffa80) on tqpair(0x993d80): expected_datao=0, payload_size=4096 00:31:10.889 [2024-11-05 12:45:40.015069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.015080] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.015088] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.015100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.889 [2024-11-05 12:45:40.015109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.889 [2024-11-05 12:45:40.015116] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.889 [2024-11-05 12:45:40.015122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffa80) on tqpair=0x993d80 00:31:10.889 [2024-11-05 12:45:40.015135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.015151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.015167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.015179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:10.889 [2024-11-05 12:45:40.015187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:10.890 [2024-11-05 12:45:40.015196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:10.890 [2024-11-05 12:45:40.015204] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:10.890 [2024-11-05 12:45:40.015212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:10.890 [2024-11-05 12:45:40.015221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:10.890 [2024-11-05 12:45:40.015253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.015262] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.015272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 [2024-11-05 12:45:40.015283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.015290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.015296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.015305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:10.890 [2024-11-05 12:45:40.015331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffa80, cid 4, qid 0 00:31:10.890 [2024-11-05 12:45:40.015343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffc00, cid 5, qid 0 00:31:10.890 [2024-11-05 12:45:40.015443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.015457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 12:45:40.015464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.015471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffa80) on tqpair=0x993d80 00:31:10.890 [2024-11-05 12:45:40.015482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.015496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 12:45:40.015503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.015510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffc00) on tqpair=0x993d80 00:31:10.890 [2024-11-05 
12:45:40.015528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.015538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.015548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 [2024-11-05 12:45:40.015569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffc00, cid 5, qid 0 00:31:10.890 [2024-11-05 12:45:40.015658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.015672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 12:45:40.015679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.015686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffc00) on tqpair=0x993d80 00:31:10.890 [2024-11-05 12:45:40.018887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.018903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.018914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 [2024-11-05 12:45:40.018937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffc00, cid 5, qid 0 00:31:10.890 [2024-11-05 12:45:40.019036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.019052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 12:45:40.019059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x9ffc00) on tqpair=0x993d80 00:31:10.890 [2024-11-05 12:45:40.019084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.019104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 [2024-11-05 12:45:40.019125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffc00, cid 5, qid 0 00:31:10.890 [2024-11-05 12:45:40.019221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.019236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 12:45:40.019243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffc00) on tqpair=0x993d80 00:31:10.890 [2024-11-05 12:45:40.019277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.019300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 [2024-11-05 12:45:40.019314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.019332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 
[2024-11-05 12:45:40.019344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.019367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 [2024-11-05 12:45:40.019385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x993d80) 00:31:10.890 [2024-11-05 12:45:40.019420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.890 [2024-11-05 12:45:40.019442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffc00, cid 5, qid 0 00:31:10.890 [2024-11-05 12:45:40.019454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffa80, cid 4, qid 0 00:31:10.890 [2024-11-05 12:45:40.019477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ffd80, cid 6, qid 0 00:31:10.890 [2024-11-05 12:45:40.019485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9fff00, cid 7, qid 0 00:31:10.890 [2024-11-05 12:45:40.019675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.890 [2024-11-05 12:45:40.019692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.890 [2024-11-05 12:45:40.019700] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019706] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=8192, cccid=5 00:31:10.890 [2024-11-05 12:45:40.019715] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x9ffc00) on tqpair(0x993d80): expected_datao=0, payload_size=8192 00:31:10.890 [2024-11-05 12:45:40.019731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019753] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019763] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.890 [2024-11-05 12:45:40.019787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.890 [2024-11-05 12:45:40.019794] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019800] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=512, cccid=4 00:31:10.890 [2024-11-05 12:45:40.019808] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ffa80) on tqpair(0x993d80): expected_datao=0, payload_size=512 00:31:10.890 [2024-11-05 12:45:40.019815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019825] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019833] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.890 [2024-11-05 12:45:40.019851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.890 [2024-11-05 12:45:40.019857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019872] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=512, cccid=6 00:31:10.890 [2024-11-05 12:45:40.019880] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ffd80) on tqpair(0x993d80): expected_datao=0, 
payload_size=512 00:31:10.890 [2024-11-05 12:45:40.019887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019897] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019904] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:10.890 [2024-11-05 12:45:40.019922] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:10.890 [2024-11-05 12:45:40.019928] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019934] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x993d80): datao=0, datal=4096, cccid=7 00:31:10.890 [2024-11-05 12:45:40.019946] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9fff00) on tqpair(0x993d80): expected_datao=0, payload_size=4096 00:31:10.890 [2024-11-05 12:45:40.019955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019965] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019972] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.019981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.019990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 12:45:40.019996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.020004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffc00) on tqpair=0x993d80 00:31:10.890 [2024-11-05 12:45:40.020023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.020035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 
12:45:40.020042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.020048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffa80) on tqpair=0x993d80 00:31:10.890 [2024-11-05 12:45:40.020064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.890 [2024-11-05 12:45:40.020075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.890 [2024-11-05 12:45:40.020082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.890 [2024-11-05 12:45:40.020089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ffd80) on tqpair=0x993d80 00:31:10.891 [2024-11-05 12:45:40.020099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.891 [2024-11-05 12:45:40.020109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.891 [2024-11-05 12:45:40.020116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.891 [2024-11-05 12:45:40.020122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9fff00) on tqpair=0x993d80 00:31:10.891 ===================================================== 00:31:10.891 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.891 ===================================================== 00:31:10.891 Controller Capabilities/Features 00:31:10.891 ================================ 00:31:10.891 Vendor ID: 8086 00:31:10.891 Subsystem Vendor ID: 8086 00:31:10.891 Serial Number: SPDK00000000000001 00:31:10.891 Model Number: SPDK bdev Controller 00:31:10.891 Firmware Version: 25.01 00:31:10.891 Recommended Arb Burst: 6 00:31:10.891 IEEE OUI Identifier: e4 d2 5c 00:31:10.891 Multi-path I/O 00:31:10.891 May have multiple subsystem ports: Yes 00:31:10.891 May have multiple controllers: Yes 00:31:10.891 Associated with SR-IOV VF: No 00:31:10.891 Max Data Transfer Size: 131072 00:31:10.891 Max Number of Namespaces: 32 00:31:10.891 
Max Number of I/O Queues: 127 00:31:10.891 NVMe Specification Version (VS): 1.3 00:31:10.891 NVMe Specification Version (Identify): 1.3 00:31:10.891 Maximum Queue Entries: 128 00:31:10.891 Contiguous Queues Required: Yes 00:31:10.891 Arbitration Mechanisms Supported 00:31:10.891 Weighted Round Robin: Not Supported 00:31:10.891 Vendor Specific: Not Supported 00:31:10.891 Reset Timeout: 15000 ms 00:31:10.891 Doorbell Stride: 4 bytes 00:31:10.891 NVM Subsystem Reset: Not Supported 00:31:10.891 Command Sets Supported 00:31:10.891 NVM Command Set: Supported 00:31:10.891 Boot Partition: Not Supported 00:31:10.891 Memory Page Size Minimum: 4096 bytes 00:31:10.891 Memory Page Size Maximum: 4096 bytes 00:31:10.891 Persistent Memory Region: Not Supported 00:31:10.891 Optional Asynchronous Events Supported 00:31:10.891 Namespace Attribute Notices: Supported 00:31:10.891 Firmware Activation Notices: Not Supported 00:31:10.891 ANA Change Notices: Not Supported 00:31:10.891 PLE Aggregate Log Change Notices: Not Supported 00:31:10.891 LBA Status Info Alert Notices: Not Supported 00:31:10.891 EGE Aggregate Log Change Notices: Not Supported 00:31:10.891 Normal NVM Subsystem Shutdown event: Not Supported 00:31:10.891 Zone Descriptor Change Notices: Not Supported 00:31:10.891 Discovery Log Change Notices: Not Supported 00:31:10.891 Controller Attributes 00:31:10.891 128-bit Host Identifier: Supported 00:31:10.891 Non-Operational Permissive Mode: Not Supported 00:31:10.891 NVM Sets: Not Supported 00:31:10.891 Read Recovery Levels: Not Supported 00:31:10.891 Endurance Groups: Not Supported 00:31:10.891 Predictable Latency Mode: Not Supported 00:31:10.891 Traffic Based Keep ALive: Not Supported 00:31:10.891 Namespace Granularity: Not Supported 00:31:10.891 SQ Associations: Not Supported 00:31:10.891 UUID List: Not Supported 00:31:10.891 Multi-Domain Subsystem: Not Supported 00:31:10.891 Fixed Capacity Management: Not Supported 00:31:10.891 Variable Capacity Management: Not Supported 
00:31:10.891 Delete Endurance Group: Not Supported 00:31:10.891 Delete NVM Set: Not Supported 00:31:10.891 Extended LBA Formats Supported: Not Supported 00:31:10.891 Flexible Data Placement Supported: Not Supported 00:31:10.891 00:31:10.891 Controller Memory Buffer Support 00:31:10.891 ================================ 00:31:10.891 Supported: No 00:31:10.891 00:31:10.891 Persistent Memory Region Support 00:31:10.891 ================================ 00:31:10.891 Supported: No 00:31:10.891 00:31:10.891 Admin Command Set Attributes 00:31:10.891 ============================ 00:31:10.891 Security Send/Receive: Not Supported 00:31:10.891 Format NVM: Not Supported 00:31:10.891 Firmware Activate/Download: Not Supported 00:31:10.891 Namespace Management: Not Supported 00:31:10.891 Device Self-Test: Not Supported 00:31:10.891 Directives: Not Supported 00:31:10.891 NVMe-MI: Not Supported 00:31:10.891 Virtualization Management: Not Supported 00:31:10.891 Doorbell Buffer Config: Not Supported 00:31:10.891 Get LBA Status Capability: Not Supported 00:31:10.891 Command & Feature Lockdown Capability: Not Supported 00:31:10.891 Abort Command Limit: 4 00:31:10.891 Async Event Request Limit: 4 00:31:10.891 Number of Firmware Slots: N/A 00:31:10.891 Firmware Slot 1 Read-Only: N/A 00:31:10.891 Firmware Activation Without Reset: N/A 00:31:10.891 Multiple Update Detection Support: N/A 00:31:10.891 Firmware Update Granularity: No Information Provided 00:31:10.891 Per-Namespace SMART Log: No 00:31:10.891 Asymmetric Namespace Access Log Page: Not Supported 00:31:10.891 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:10.891 Command Effects Log Page: Supported 00:31:10.891 Get Log Page Extended Data: Supported 00:31:10.891 Telemetry Log Pages: Not Supported 00:31:10.891 Persistent Event Log Pages: Not Supported 00:31:10.891 Supported Log Pages Log Page: May Support 00:31:10.891 Commands Supported & Effects Log Page: Not Supported 00:31:10.891 Feature Identifiers & Effects Log Page:May Support 
00:31:10.891 NVMe-MI Commands & Effects Log Page: May Support 00:31:10.891 Data Area 4 for Telemetry Log: Not Supported 00:31:10.891 Error Log Page Entries Supported: 128 00:31:10.891 Keep Alive: Supported 00:31:10.891 Keep Alive Granularity: 10000 ms 00:31:10.891 00:31:10.891 NVM Command Set Attributes 00:31:10.891 ========================== 00:31:10.891 Submission Queue Entry Size 00:31:10.891 Max: 64 00:31:10.891 Min: 64 00:31:10.891 Completion Queue Entry Size 00:31:10.891 Max: 16 00:31:10.891 Min: 16 00:31:10.891 Number of Namespaces: 32 00:31:10.891 Compare Command: Supported 00:31:10.891 Write Uncorrectable Command: Not Supported 00:31:10.891 Dataset Management Command: Supported 00:31:10.891 Write Zeroes Command: Supported 00:31:10.891 Set Features Save Field: Not Supported 00:31:10.891 Reservations: Supported 00:31:10.891 Timestamp: Not Supported 00:31:10.891 Copy: Supported 00:31:10.891 Volatile Write Cache: Present 00:31:10.891 Atomic Write Unit (Normal): 1 00:31:10.891 Atomic Write Unit (PFail): 1 00:31:10.891 Atomic Compare & Write Unit: 1 00:31:10.891 Fused Compare & Write: Supported 00:31:10.891 Scatter-Gather List 00:31:10.891 SGL Command Set: Supported 00:31:10.891 SGL Keyed: Supported 00:31:10.891 SGL Bit Bucket Descriptor: Not Supported 00:31:10.891 SGL Metadata Pointer: Not Supported 00:31:10.891 Oversized SGL: Not Supported 00:31:10.891 SGL Metadata Address: Not Supported 00:31:10.891 SGL Offset: Supported 00:31:10.891 Transport SGL Data Block: Not Supported 00:31:10.891 Replay Protected Memory Block: Not Supported 00:31:10.891 00:31:10.891 Firmware Slot Information 00:31:10.891 ========================= 00:31:10.891 Active slot: 1 00:31:10.891 Slot 1 Firmware Revision: 25.01 00:31:10.891 00:31:10.891 00:31:10.891 Commands Supported and Effects 00:31:10.891 ============================== 00:31:10.891 Admin Commands 00:31:10.891 -------------- 00:31:10.891 Get Log Page (02h): Supported 00:31:10.891 Identify (06h): Supported 00:31:10.891 Abort 
(08h): Supported 00:31:10.891 Set Features (09h): Supported 00:31:10.891 Get Features (0Ah): Supported 00:31:10.891 Asynchronous Event Request (0Ch): Supported 00:31:10.891 Keep Alive (18h): Supported 00:31:10.891 I/O Commands 00:31:10.891 ------------ 00:31:10.891 Flush (00h): Supported LBA-Change 00:31:10.891 Write (01h): Supported LBA-Change 00:31:10.891 Read (02h): Supported 00:31:10.891 Compare (05h): Supported 00:31:10.891 Write Zeroes (08h): Supported LBA-Change 00:31:10.891 Dataset Management (09h): Supported LBA-Change 00:31:10.891 Copy (19h): Supported LBA-Change 00:31:10.891 00:31:10.891 Error Log 00:31:10.891 ========= 00:31:10.891 00:31:10.891 Arbitration 00:31:10.891 =========== 00:31:10.891 Arbitration Burst: 1 00:31:10.891 00:31:10.891 Power Management 00:31:10.891 ================ 00:31:10.891 Number of Power States: 1 00:31:10.891 Current Power State: Power State #0 00:31:10.891 Power State #0: 00:31:10.891 Max Power: 0.00 W 00:31:10.891 Non-Operational State: Operational 00:31:10.891 Entry Latency: Not Reported 00:31:10.891 Exit Latency: Not Reported 00:31:10.891 Relative Read Throughput: 0 00:31:10.891 Relative Read Latency: 0 00:31:10.891 Relative Write Throughput: 0 00:31:10.891 Relative Write Latency: 0 00:31:10.891 Idle Power: Not Reported 00:31:10.891 Active Power: Not Reported 00:31:10.891 Non-Operational Permissive Mode: Not Supported 00:31:10.891 00:31:10.891 Health Information 00:31:10.891 ================== 00:31:10.891 Critical Warnings: 00:31:10.891 Available Spare Space: OK 00:31:10.891 Temperature: OK 00:31:10.891 Device Reliability: OK 00:31:10.891 Read Only: No 00:31:10.891 Volatile Memory Backup: OK 00:31:10.891 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:10.891 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:10.892 Available Spare: 0% 00:31:10.892 Available Spare Threshold: 0% 00:31:10.892 Life Percentage Used:[2024-11-05 12:45:40.020269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 
[2024-11-05 12:45:40.020282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.020292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.020315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9fff00, cid 7, qid 0 00:31:10.892 [2024-11-05 12:45:40.020436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.020451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.020458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.020465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9fff00) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.020517] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:10.892 [2024-11-05 12:45:40.020540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff480) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.020551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.892 [2024-11-05 12:45:40.020561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff600) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.020569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.892 [2024-11-05 12:45:40.020577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff780) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.020585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.892 
[2024-11-05 12:45:40.020592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.020604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:10.892 [2024-11-05 12:45:40.020619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.020643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.020649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.020659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.020681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.020792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.020807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.020814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.020821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.020835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.020844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.020850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.020868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.020899] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.021014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.021029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.021036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.021051] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:10.892 [2024-11-05 12:45:40.021062] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:10.892 [2024-11-05 12:45:40.021078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.021105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.021126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.021216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.021232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.021239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.021262] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.021291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.021311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.021408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.021423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.021430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.021455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.021482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.021503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.021596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.021611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.021618] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.021643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.021670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.021692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.021777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.021791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.021798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.021823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021833] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.021849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.021879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 
12:45:40.021966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.021980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.021987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.021994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.022013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.022039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.022060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.022152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.022166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.022177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.022204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.022230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.022251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.892 [2024-11-05 12:45:40.022343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.892 [2024-11-05 12:45:40.022357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.892 [2024-11-05 12:45:40.022364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.892 [2024-11-05 12:45:40.022390] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.892 [2024-11-05 12:45:40.022406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.892 [2024-11-05 12:45:40.022416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.892 [2024-11-05 12:45:40.022439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.893 [2024-11-05 12:45:40.022524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.893 [2024-11-05 12:45:40.022538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.893 [2024-11-05 12:45:40.022545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.022552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.893 [2024-11-05 12:45:40.022569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.022579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.893 
[2024-11-05 12:45:40.022586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.893 [2024-11-05 12:45:40.022596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.893 [2024-11-05 12:45:40.022619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.893 [2024-11-05 12:45:40.022704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.893 [2024-11-05 12:45:40.022718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.893 [2024-11-05 12:45:40.022725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.022732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.893 [2024-11-05 12:45:40.022750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.022760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.022766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.893 [2024-11-05 12:45:40.022776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.893 [2024-11-05 12:45:40.022799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.893 [2024-11-05 12:45:40.026875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.893 [2024-11-05 12:45:40.026893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.893 [2024-11-05 12:45:40.026900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.026911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 
00:31:10.893 [2024-11-05 12:45:40.026943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.026956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.026962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x993d80) 00:31:10.893 [2024-11-05 12:45:40.026973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:10.893 [2024-11-05 12:45:40.026995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ff900, cid 3, qid 0 00:31:10.893 [2024-11-05 12:45:40.027084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:10.893 [2024-11-05 12:45:40.027099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:10.893 [2024-11-05 12:45:40.027106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:10.893 [2024-11-05 12:45:40.027113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ff900) on tqpair=0x993d80 00:31:10.893 [2024-11-05 12:45:40.027130] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:31:10.893 0% 00:31:10.893 Data Units Read: 0 00:31:10.893 Data Units Written: 0 00:31:10.893 Host Read Commands: 0 00:31:10.893 Host Write Commands: 0 00:31:10.893 Controller Busy Time: 0 minutes 00:31:10.893 Power Cycles: 0 00:31:10.893 Power On Hours: 0 hours 00:31:10.893 Unsafe Shutdowns: 0 00:31:10.893 Unrecoverable Media Errors: 0 00:31:10.893 Lifetime Error Log Entries: 0 00:31:10.893 Warning Temperature Time: 0 minutes 00:31:10.893 Critical Temperature Time: 0 minutes 00:31:10.893 00:31:10.893 Number of Queues 00:31:10.893 ================ 00:31:10.893 Number of I/O Submission Queues: 127 00:31:10.893 Number of I/O Completion Queues: 127 00:31:10.893 00:31:10.893 Active Namespaces 00:31:10.893 
================= 00:31:10.893 Namespace ID:1 00:31:10.893 Error Recovery Timeout: Unlimited 00:31:10.893 Command Set Identifier: NVM (00h) 00:31:10.893 Deallocate: Supported 00:31:10.893 Deallocated/Unwritten Error: Not Supported 00:31:10.893 Deallocated Read Value: Unknown 00:31:10.893 Deallocate in Write Zeroes: Not Supported 00:31:10.893 Deallocated Guard Field: 0xFFFF 00:31:10.893 Flush: Supported 00:31:10.893 Reservation: Supported 00:31:10.893 Namespace Sharing Capabilities: Multiple Controllers 00:31:10.893 Size (in LBAs): 131072 (0GiB) 00:31:10.893 Capacity (in LBAs): 131072 (0GiB) 00:31:10.893 Utilization (in LBAs): 131072 (0GiB) 00:31:10.893 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:10.893 EUI64: ABCDEF0123456789 00:31:10.893 UUID: e3317978-db1c-4fd8-a8ae-b987962c1bd1 00:31:10.893 Thin Provisioning: Not Supported 00:31:10.893 Per-NS Atomic Units: Yes 00:31:10.893 Atomic Boundary Size (Normal): 0 00:31:10.893 Atomic Boundary Size (PFail): 0 00:31:10.893 Atomic Boundary Offset: 0 00:31:10.893 Maximum Single Source Range Length: 65535 00:31:10.893 Maximum Copy Length: 65535 00:31:10.893 Maximum Source Range Count: 1 00:31:10.893 NGUID/EUI64 Never Reused: No 00:31:10.893 Namespace Write Protected: No 00:31:10.893 Number of LBA Formats: 1 00:31:10.893 Current LBA Format: LBA Format #00 00:31:10.893 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:10.893 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:10.893 rmmod nvme_tcp 00:31:10.893 rmmod nvme_fabrics 00:31:10.893 rmmod nvme_keyring 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 752306 ']' 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 752306 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 752306 ']' 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 752306 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:31:10.893 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 752306 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:11.152 
12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 752306' 00:31:11.152 killing process with pid 752306 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 752306 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 752306 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.152 12:45:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:13.690 00:31:13.690 real 0m5.717s 00:31:13.690 user 0m4.755s 00:31:13.690 sys 0m2.036s 00:31:13.690 12:45:42 
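The xtrace above steps through the teardown helper that checks a pid, inspects the process name via `ps`, and then kills and reaps it. A minimal re-creation of that logic is sketched below; the function name mirrors the trace, but the exact flags and edge-case handling of the upstream `autotest_common.sh` helper are assumptions, not a verbatim copy.

```shell
# Hypothetical sketch of the killprocess helper traced in the log above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # no pid given
    kill -0 "$pid" 2>/dev/null || return 1  # process must still exist
    local process_name
    if [ "$(uname)" = Linux ]; then
        # matches the trace: ps --no-headers -o comm= <pid>
        process_name=$(ps --no-headers -o comm= "$pid")
    else
        process_name=$(ps -o comm= -p "$pid" | tail -n1)
    fi
    # refuse to signal a bare sudo wrapper; target its child instead
    [ "$process_name" = sudo ] && pid=$(pgrep -P "$pid")
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

# demonstrate on a throw-away background process
sleep 60 &
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null && echo "still alive" || echo "terminated"
```

The `kill -0` probe is the conventional way to test for process existence without delivering a signal.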
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.690 ************************************ 00:31:13.690 END TEST nvmf_identify 00:31:13.690 ************************************ 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:13.690 ************************************ 00:31:13.690 START TEST nvmf_perf 00:31:13.690 ************************************ 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:13.690 * Looking for test storage... 
00:31:13.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:13.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.690 --rc genhtml_branch_coverage=1 00:31:13.690 --rc genhtml_function_coverage=1 00:31:13.690 --rc genhtml_legend=1 00:31:13.690 --rc geninfo_all_blocks=1 00:31:13.690 --rc geninfo_unexecuted_blocks=1 00:31:13.690 00:31:13.690 ' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:13.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
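The `cmp_versions` / `lt` trace above (splitting each version string on `.-:` into `ver1`/`ver2`, padding the shorter array, and comparing field by field) can be condensed into a small standalone sketch. Variable names follow the trace; the real `scripts/common.sh` implementation may differ in how it treats non-numeric fields.

```shell
# Minimal sketch of the version comparison stepped through in the trace above.
cmp_versions() {
    local ver1 ver2 v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        [[ $d1 =~ ^[0-9]+$ ]] || d1=0   # non-numeric fields compare as 0 here
        [[ $d2 =~ ^[0-9]+$ ]] || d2=0
        if ((d1 > d2)); then [[ $op == '>' ]]; return; fi
        if ((d1 < d2)); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"        # the exact check from the trace
lt 1.2.3 1.10 && echo "1.2.3 < 1.10"  # numeric, not lexicographic
```

Field-wise numeric comparison is what makes `1.2.3 < 1.10` hold, where a plain string comparison would get it wrong.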
00:31:13.690 --rc genhtml_branch_coverage=1 00:31:13.690 --rc genhtml_function_coverage=1 00:31:13.690 --rc genhtml_legend=1 00:31:13.690 --rc geninfo_all_blocks=1 00:31:13.690 --rc geninfo_unexecuted_blocks=1 00:31:13.690 00:31:13.690 ' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:13.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.690 --rc genhtml_branch_coverage=1 00:31:13.690 --rc genhtml_function_coverage=1 00:31:13.690 --rc genhtml_legend=1 00:31:13.690 --rc geninfo_all_blocks=1 00:31:13.690 --rc geninfo_unexecuted_blocks=1 00:31:13.690 00:31:13.690 ' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:13.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:13.690 --rc genhtml_branch_coverage=1 00:31:13.690 --rc genhtml_function_coverage=1 00:31:13.690 --rc genhtml_legend=1 00:31:13.690 --rc geninfo_all_blocks=1 00:31:13.690 --rc geninfo_unexecuted_blocks=1 00:31:13.690 00:31:13.690 ' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.690 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:13.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:13.691 12:45:42 
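The `[: : integer expression expected` error logged above comes from feeding an empty value to `test`'s numeric `-eq` operator (`'[' '' -eq 1 ']'`). A defensive pattern defaults the variable before the numeric test; the variable name below is hypothetical, chosen only to illustrate the guard.

```shell
# SOME_FLAG stands in for whichever variable was empty at nvmf/common.sh:33;
# the real name is not visible in the log, so this is an assumption.
SOME_FLAG=""

# buggy form: empty string reaches -eq and test prints
# "[: : integer expression expected" on stderr, returning nonzero
[ "$SOME_FLAG" -eq 1 ] 2>/dev/null && echo "flag set" || echo "flag unset or empty"

# guarded form: ${var:-0} substitutes a numeric default, so -eq never
# sees an empty operand
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

The same guard can be written as `[ -n "$SOME_FLAG" ] && [ "$SOME_FLAG" -eq 1 ]`, which short-circuits before the numeric test.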
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.691 12:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.592 12:45:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.592 
12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:15.592 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:15.592 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:15.592 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.592 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.592 12:45:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:15.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.593 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:31:15.851 00:31:15.851 --- 10.0.0.2 ping statistics --- 00:31:15.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.851 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:15.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:31:15.851 00:31:15.851 --- 10.0.0.1 ping statistics --- 00:31:15.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.851 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=754395 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 754395 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 754395 ']' 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:15.851 12:45:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.851 [2024-11-05 12:45:44.944446] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:31:15.851 [2024-11-05 12:45:44.944524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.851 [2024-11-05 12:45:45.019269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.851 [2024-11-05 12:45:45.066980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.851 [2024-11-05 12:45:45.067033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.851 [2024-11-05 12:45:45.067058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.851 [2024-11-05 12:45:45.067069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.851 [2024-11-05 12:45:45.067078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:15.851 [2024-11-05 12:45:45.068648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.851 [2024-11-05 12:45:45.068707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.851 [2024-11-05 12:45:45.068735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.851 [2024-11-05 12:45:45.068738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:16.110 12:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:19.395 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:19.395 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:19.395 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:31:19.395 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.653 12:45:48 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:19.653 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:31:19.653 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:19.653 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:19.653 12:45:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:19.911 [2024-11-05 12:45:49.148049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.169 12:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.427 12:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:20.427 12:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.684 12:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:20.684 12:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:20.941 12:45:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.198 [2024-11-05 12:45:50.247196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.198 12:45:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:31:21.457 12:45:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:31:21.457 12:45:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:31:21.457 12:45:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:21.457 12:45:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:31:22.831 Initializing NVMe Controllers 00:31:22.831 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:31:22.831 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:31:22.831 Initialization complete. Launching workers. 00:31:22.831 ======================================================== 00:31:22.831 Latency(us) 00:31:22.831 Device Information : IOPS MiB/s Average min max 00:31:22.831 PCIE (0000:88:00.0) NSID 1 from core 0: 86290.97 337.07 370.35 33.26 5261.22 00:31:22.831 ======================================================== 00:31:22.831 Total : 86290.97 337.07 370.35 33.26 5261.22 00:31:22.831 00:31:22.831 12:45:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:24.208 Initializing NVMe Controllers 00:31:24.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:24.208 Initialization complete. Launching workers. 
00:31:24.208 ======================================================== 00:31:24.208 Latency(us) 00:31:24.208 Device Information : IOPS MiB/s Average min max 00:31:24.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.63 0.40 10072.64 153.23 45809.18 00:31:24.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 70.75 0.28 14807.89 4970.53 54858.12 00:31:24.208 ======================================================== 00:31:24.208 Total : 173.38 0.68 12004.84 153.23 54858.12 00:31:24.208 00:31:24.208 12:45:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:25.146 Initializing NVMe Controllers 00:31:25.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:25.146 Initialization complete. Launching workers. 
00:31:25.146 ======================================================== 00:31:25.146 Latency(us) 00:31:25.146 Device Information : IOPS MiB/s Average min max 00:31:25.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8337.54 32.57 3852.92 737.94 9323.69 00:31:25.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3793.79 14.82 8482.63 5234.05 16735.93 00:31:25.146 ======================================================== 00:31:25.146 Total : 12131.33 47.39 5300.76 737.94 16735.93 00:31:25.146 00:31:25.146 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:25.146 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:25.146 12:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:27.677 Initializing NVMe Controllers 00:31:27.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:27.677 Controller IO queue size 128, less than required. 00:31:27.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:27.677 Controller IO queue size 128, less than required. 00:31:27.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:27.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:27.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:27.677 Initialization complete. Launching workers. 
00:31:27.677 ======================================================== 00:31:27.677 Latency(us) 00:31:27.677 Device Information : IOPS MiB/s Average min max 00:31:27.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1679.20 419.80 76656.15 49055.13 125806.40 00:31:27.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 558.90 139.73 236798.99 102560.57 398995.95 00:31:27.677 ======================================================== 00:31:27.677 Total : 2238.10 559.53 116647.19 49055.13 398995.95 00:31:27.677 00:31:27.934 12:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:28.192 No valid NVMe controllers or AIO or URING devices found 00:31:28.192 Initializing NVMe Controllers 00:31:28.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.192 Controller IO queue size 128, less than required. 00:31:28.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.192 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:28.192 Controller IO queue size 128, less than required. 00:31:28.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.192 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:28.192 WARNING: Some requested NVMe devices were skipped 00:31:28.192 12:45:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:31.478 Initializing NVMe Controllers 00:31:31.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:31.478 Controller IO queue size 128, less than required. 00:31:31.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:31.478 Controller IO queue size 128, less than required. 00:31:31.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:31.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:31.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:31.478 Initialization complete. Launching workers. 
00:31:31.478 00:31:31.478 ==================== 00:31:31.478 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:31.478 TCP transport: 00:31:31.478 polls: 8886 00:31:31.478 idle_polls: 5479 00:31:31.478 sock_completions: 3407 00:31:31.478 nvme_completions: 6331 00:31:31.478 submitted_requests: 9518 00:31:31.478 queued_requests: 1 00:31:31.478 00:31:31.478 ==================== 00:31:31.478 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:31.478 TCP transport: 00:31:31.478 polls: 11929 00:31:31.478 idle_polls: 8942 00:31:31.478 sock_completions: 2987 00:31:31.478 nvme_completions: 5723 00:31:31.478 submitted_requests: 8600 00:31:31.478 queued_requests: 1 00:31:31.478 ======================================================== 00:31:31.478 Latency(us) 00:31:31.478 Device Information : IOPS MiB/s Average min max 00:31:31.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1579.15 394.79 83301.06 43210.81 120351.21 00:31:31.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1427.47 356.87 89394.65 46451.42 136809.52 00:31:31.478 ======================================================== 00:31:31.478 Total : 3006.62 751.65 86194.15 43210.81 136809.52 00:31:31.478 00:31:31.478 12:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:31.478 12:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:31.478 12:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:31.478 12:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:31:31.478 12:46:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=0b5d2bee-41b9-4c86-b71c-959ae45f8290 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0b5d2bee-41b9-4c86-b71c-959ae45f8290 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=0b5d2bee-41b9-4c86-b71c-959ae45f8290 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:31:34.761 { 00:31:34.761 "uuid": "0b5d2bee-41b9-4c86-b71c-959ae45f8290", 00:31:34.761 "name": "lvs_0", 00:31:34.761 "base_bdev": "Nvme0n1", 00:31:34.761 "total_data_clusters": 238234, 00:31:34.761 "free_clusters": 238234, 00:31:34.761 "block_size": 512, 00:31:34.761 "cluster_size": 4194304 00:31:34.761 } 00:31:34.761 ]' 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="0b5d2bee-41b9-4c86-b71c-959ae45f8290") .free_clusters' 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=238234 00:31:34.761 12:46:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="0b5d2bee-41b9-4c86-b71c-959ae45f8290") .cluster_size' 00:31:35.038 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:31:35.038 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=952936 00:31:35.038 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 952936 
00:31:35.038 952936 00:31:35.038 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:35.038 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:35.038 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0b5d2bee-41b9-4c86-b71c-959ae45f8290 lbd_0 20480 00:31:35.638 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=fe924f86-4abd-4f71-83ac-aed5a29cb39f 00:31:35.638 12:46:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore fe924f86-4abd-4f71-83ac-aed5a29cb39f lvs_n_0 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d9862923-d972-44a5-bfff-f84e1255089e 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d9862923-d972-44a5-bfff-f84e1255089e 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=d9862923-d972-44a5-bfff-f84e1255089e 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:31:36.573 { 00:31:36.573 "uuid": "0b5d2bee-41b9-4c86-b71c-959ae45f8290", 00:31:36.573 "name": "lvs_0", 00:31:36.573 "base_bdev": "Nvme0n1", 00:31:36.573 "total_data_clusters": 238234, 00:31:36.573 "free_clusters": 233114, 00:31:36.573 "block_size": 512, 00:31:36.573 
"cluster_size": 4194304 00:31:36.573 }, 00:31:36.573 { 00:31:36.573 "uuid": "d9862923-d972-44a5-bfff-f84e1255089e", 00:31:36.573 "name": "lvs_n_0", 00:31:36.573 "base_bdev": "fe924f86-4abd-4f71-83ac-aed5a29cb39f", 00:31:36.573 "total_data_clusters": 5114, 00:31:36.573 "free_clusters": 5114, 00:31:36.573 "block_size": 512, 00:31:36.573 "cluster_size": 4194304 00:31:36.573 } 00:31:36.573 ]' 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="d9862923-d972-44a5-bfff-f84e1255089e") .free_clusters' 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=5114 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="d9862923-d972-44a5-bfff-f84e1255089e") .cluster_size' 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=20456 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 20456 00:31:36.573 20456 00:31:36.573 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:36.832 12:46:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d9862923-d972-44a5-bfff-f84e1255089e lbd_nest_0 20456 00:31:37.090 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=1978324c-b82f-4078-8cb1-d3b0a7846930 00:31:37.090 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:37.348 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:37.348 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 1978324c-b82f-4078-8cb1-d3b0a7846930 00:31:37.605 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.865 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:37.865 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:37.865 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:37.865 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:37.865 12:46:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:50.079 Initializing NVMe Controllers 00:31:50.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:50.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:50.079 Initialization complete. Launching workers. 
00:31:50.079 ======================================================== 00:31:50.079 Latency(us) 00:31:50.079 Device Information : IOPS MiB/s Average min max 00:31:50.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.49 0.02 22024.90 175.69 46595.47 00:31:50.079 ======================================================== 00:31:50.079 Total : 45.49 0.02 22024.90 175.69 46595.47 00:31:50.079 00:31:50.079 12:46:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:50.079 12:46:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:00.050 Initializing NVMe Controllers 00:32:00.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:00.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:00.050 Initialization complete. Launching workers. 
00:32:00.050 ======================================================== 00:32:00.050 Latency(us) 00:32:00.050 Device Information : IOPS MiB/s Average min max 00:32:00.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.50 9.31 13422.95 5050.15 51822.84 00:32:00.050 ======================================================== 00:32:00.050 Total : 74.50 9.31 13422.95 5050.15 51822.84 00:32:00.050 00:32:00.050 12:46:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:00.050 12:46:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:00.050 12:46:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:10.025 Initializing NVMe Controllers 00:32:10.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:10.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:10.025 Initialization complete. Launching workers. 
00:32:10.025 ======================================================== 00:32:10.025 Latency(us) 00:32:10.025 Device Information : IOPS MiB/s Average min max 00:32:10.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7639.80 3.73 4190.01 327.18 11009.08 00:32:10.025 ======================================================== 00:32:10.025 Total : 7639.80 3.73 4190.01 327.18 11009.08 00:32:10.025 00:32:10.025 12:46:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:10.025 12:46:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:20.006 Initializing NVMe Controllers 00:32:20.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:20.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:20.006 Initialization complete. Launching workers. 
00:32:20.006 ======================================================== 00:32:20.006 Latency(us) 00:32:20.006 Device Information : IOPS MiB/s Average min max 00:32:20.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3947.13 493.39 8108.31 768.36 18140.19 00:32:20.006 ======================================================== 00:32:20.006 Total : 3947.13 493.39 8108.31 768.36 18140.19 00:32:20.006 00:32:20.006 12:46:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:20.006 12:46:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:20.006 12:46:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:29.997 Initializing NVMe Controllers 00:32:29.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:29.997 Controller IO queue size 128, less than required. 00:32:29.997 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:29.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:29.997 Initialization complete. Launching workers. 
00:32:29.997 ======================================================== 00:32:29.997 Latency(us) 00:32:29.997 Device Information : IOPS MiB/s Average min max 00:32:29.997 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11713.03 5.72 10927.75 1505.84 26752.57 00:32:29.997 ======================================================== 00:32:29.997 Total : 11713.03 5.72 10927.75 1505.84 26752.57 00:32:29.997 00:32:29.997 12:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:29.997 12:46:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:39.993 Initializing NVMe Controllers 00:32:39.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:39.993 Controller IO queue size 128, less than required. 00:32:39.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:39.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:39.993 Initialization complete. Launching workers. 
00:32:39.993 ======================================================== 00:32:39.993 Latency(us) 00:32:39.993 Device Information : IOPS MiB/s Average min max 00:32:39.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1180.30 147.54 109058.82 31546.22 229353.58 00:32:39.993 ======================================================== 00:32:39.993 Total : 1180.30 147.54 109058.82 31546.22 229353.58 00:32:39.993 00:32:39.993 12:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:40.251 12:47:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1978324c-b82f-4078-8cb1-d3b0a7846930 00:32:41.189 12:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:41.190 12:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fe924f86-4abd-4f71-83ac-aed5a29cb39f 00:32:41.759 12:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:41.759 12:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:41.759 12:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:41.759 12:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:41.759 12:47:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.019 rmmod nvme_tcp 00:32:42.019 rmmod nvme_fabrics 00:32:42.019 rmmod nvme_keyring 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 754395 ']' 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 754395 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 754395 ']' 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 754395 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 754395 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 754395' 00:32:42.019 killing process with pid 754395 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 754395 00:32:42.019 12:47:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 754395 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.433 12:47:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.974 00:32:45.974 real 1m32.214s 00:32:45.974 user 5m41.116s 00:32:45.974 sys 0m15.711s 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:45.974 ************************************ 00:32:45.974 END TEST nvmf_perf 00:32:45.974 ************************************ 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:45.974 ************************************ 00:32:45.974 START TEST nvmf_fio_host 00:32:45.974 ************************************ 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:45.974 * Looking for test storage... 00:32:45.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- 
# export 'LCOV_OPTS= 00:32:45.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.974 --rc genhtml_branch_coverage=1 00:32:45.974 --rc genhtml_function_coverage=1 00:32:45.974 --rc genhtml_legend=1 00:32:45.974 --rc geninfo_all_blocks=1 00:32:45.974 --rc geninfo_unexecuted_blocks=1 00:32:45.974 00:32:45.974 ' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:45.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.974 --rc genhtml_branch_coverage=1 00:32:45.974 --rc genhtml_function_coverage=1 00:32:45.974 --rc genhtml_legend=1 00:32:45.974 --rc geninfo_all_blocks=1 00:32:45.974 --rc geninfo_unexecuted_blocks=1 00:32:45.974 00:32:45.974 ' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:45.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.974 --rc genhtml_branch_coverage=1 00:32:45.974 --rc genhtml_function_coverage=1 00:32:45.974 --rc genhtml_legend=1 00:32:45.974 --rc geninfo_all_blocks=1 00:32:45.974 --rc geninfo_unexecuted_blocks=1 00:32:45.974 00:32:45.974 ' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:45.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.974 --rc genhtml_branch_coverage=1 00:32:45.974 --rc genhtml_function_coverage=1 00:32:45.974 --rc genhtml_legend=1 00:32:45.974 --rc geninfo_all_blocks=1 00:32:45.974 --rc geninfo_unexecuted_blocks=1 00:32:45.974 00:32:45.974 ' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.974 12:47:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.974 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:45.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.975 12:47:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.975 12:47:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:32:47.880 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:47.880 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.880 12:47:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:47.880 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:47.880 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.880 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.881 12:47:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.881 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:48.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:32:48.139 00:32:48.139 --- 10.0.0.2 ping statistics --- 00:32:48.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.139 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:48.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:32:48.139 00:32:48.139 --- 10.0.0.1 ping statistics --- 00:32:48.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.139 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=766444 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 766444 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 766444 ']' 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:48.139 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.139 [2024-11-05 12:47:17.260076] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:32:48.139 [2024-11-05 12:47:17.260184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:48.139 [2024-11-05 12:47:17.334329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:48.397 [2024-11-05 12:47:17.381929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.397 [2024-11-05 12:47:17.381977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:32:48.397 [2024-11-05 12:47:17.381993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.397 [2024-11-05 12:47:17.382012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.397 [2024-11-05 12:47:17.382022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:48.397 [2024-11-05 12:47:17.383691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.397 [2024-11-05 12:47:17.383801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.397 [2024-11-05 12:47:17.383907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.397 [2024-11-05 12:47:17.383911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.397 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:48.397 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:32:48.397 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:48.655 [2024-11-05 12:47:17.755137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.655 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:48.655 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:48.655 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.655 12:47:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:48.913 Malloc1 00:32:48.913 12:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:49.170 12:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:49.428 12:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.686 [2024-11-05 12:47:18.884592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.686 12:47:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:49.944 12:47:19 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:32:49.944 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:50.203 12:47:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:50.203 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:50.203 fio-3.35 00:32:50.203 Starting 1 thread 00:32:52.728 00:32:52.728 test: (groupid=0, jobs=1): err= 0: pid=766856: Tue Nov 5 12:47:21 2024 00:32:52.728 read: IOPS=8753, BW=34.2MiB/s (35.9MB/s)(68.6MiB/2007msec) 00:32:52.728 slat (usec): min=2, max=109, avg= 2.54, stdev= 1.43 00:32:52.728 clat (usec): min=2365, max=13808, avg=7993.80, stdev=664.26 00:32:52.728 lat (usec): min=2388, max=13810, avg=7996.34, stdev=664.19 00:32:52.728 clat percentiles (usec): 00:32:52.728 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:32:52.728 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:32:52.728 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:32:52.728 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[11338], 99.95th=[12387], 00:32:52.728 | 99.99th=[13173] 00:32:52.728 bw ( KiB/s): min=33960, max=35648, per=100.00%, avg=35014.00, stdev=731.01, samples=4 00:32:52.728 iops : min= 8490, max= 8912, avg=8753.50, stdev=182.75, samples=4 00:32:52.728 write: IOPS=8755, BW=34.2MiB/s (35.9MB/s)(68.6MiB/2007msec); 0 zone resets 00:32:52.728 slat (nsec): min=2154, max=97590, avg=2660.13, stdev=1243.71 00:32:52.728 clat (usec): min=941, max=13100, avg=6574.43, stdev=572.70 00:32:52.728 lat (usec): min=947, max=13103, avg=6577.09, stdev=572.67 00:32:52.728 clat percentiles (usec): 00:32:52.728 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:32:52.728 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:32:52.728 | 
70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:32:52.728 | 99.00th=[ 7767], 99.50th=[ 7898], 99.90th=[11338], 99.95th=[12387], 00:32:52.728 | 99.99th=[13042] 00:32:52.728 bw ( KiB/s): min=34760, max=35416, per=99.97%, avg=35012.00, stdev=283.71, samples=4 00:32:52.728 iops : min= 8690, max= 8854, avg=8753.00, stdev=70.93, samples=4 00:32:52.728 lat (usec) : 1000=0.01% 00:32:52.728 lat (msec) : 2=0.02%, 4=0.12%, 10=99.69%, 20=0.16% 00:32:52.728 cpu : usr=63.31%, sys=35.09%, ctx=90, majf=0, minf=36 00:32:52.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:52.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:52.728 issued rwts: total=17568,17572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:52.728 00:32:52.728 Run status group 0 (all jobs): 00:32:52.728 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.6MiB (72.0MB), run=2007-2007msec 00:32:52.729 WRITE: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.6MiB (72.0MB), run=2007-2007msec 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:52.729 12:47:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:52.989 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:52.989 fio-3.35 00:32:52.989 Starting 1 thread 00:32:55.515 00:32:55.515 test: (groupid=0, jobs=1): err= 0: pid=767191: Tue Nov 5 12:47:24 2024 00:32:55.515 read: IOPS=8118, BW=127MiB/s (133MB/s)(254MiB/2006msec) 00:32:55.515 slat (usec): min=2, max=124, avg= 3.72, stdev= 2.19 00:32:55.515 clat (usec): min=2439, max=15884, avg=8849.32, stdev=1882.84 00:32:55.515 lat (usec): min=2442, max=15887, avg=8853.04, stdev=1882.85 00:32:55.515 clat percentiles (usec): 00:32:55.515 | 1.00th=[ 5014], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7242], 00:32:55.515 | 30.00th=[ 7767], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9372], 00:32:55.515 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11207], 95.00th=[12125], 00:32:55.515 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15401], 99.95th=[15533], 00:32:55.515 | 99.99th=[15795] 00:32:55.515 bw ( KiB/s): min=55648, max=77024, per=52.53%, avg=68240.00, stdev=9784.73, samples=4 00:32:55.515 iops : min= 3478, max= 4814, avg=4265.00, stdev=611.55, samples=4 00:32:55.515 write: IOPS=5039, BW=78.7MiB/s (82.6MB/s)(140MiB/1776msec); 0 zone resets 00:32:55.515 slat (usec): min=30, max=193, avg=34.42, stdev= 6.45 00:32:55.515 clat (usec): min=5720, max=20304, avg=11984.19, stdev=1856.94 00:32:55.515 lat (usec): min=5751, max=20335, avg=12018.60, stdev=1856.77 00:32:55.515 clat percentiles (usec): 00:32:55.515 | 1.00th=[ 8160], 
5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10421], 00:32:55.515 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:32:55.515 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14484], 95.00th=[15270], 00:32:55.515 | 99.00th=[16712], 99.50th=[17695], 99.90th=[19792], 99.95th=[20055], 00:32:55.515 | 99.99th=[20317] 00:32:55.515 bw ( KiB/s): min=59104, max=79680, per=88.13%, avg=71056.00, stdev=9583.29, samples=4 00:32:55.515 iops : min= 3694, max= 4980, avg=4441.00, stdev=598.96, samples=4 00:32:55.515 lat (msec) : 4=0.13%, 10=53.20%, 20=46.66%, 50=0.02% 00:32:55.515 cpu : usr=75.51%, sys=23.24%, ctx=44, majf=0, minf=66 00:32:55.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:55.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:55.515 issued rwts: total=16286,8950,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:55.515 00:32:55.515 Run status group 0 (all jobs): 00:32:55.515 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=254MiB (267MB), run=2006-2006msec 00:32:55.515 WRITE: bw=78.7MiB/s (82.6MB/s), 78.7MiB/s-78.7MiB/s (82.6MB/s-82.6MB/s), io=140MiB (147MB), run=1776-1776msec 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1496 -- # local bdfs 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:55.515 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:55.772 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:55.772 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:32:55.772 12:47:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:59.051 Nvme0n1 00:32:59.051 12:47:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:01.578 12:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=26afdfe1-b02f-4cb3-88f3-a7f3fb23d696 00:33:01.578 12:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 26afdfe1-b02f-4cb3-88f3-a7f3fb23d696 00:33:01.578 12:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=26afdfe1-b02f-4cb3-88f3-a7f3fb23d696 00:33:01.578 12:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:33:01.578 12:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:33:01.578 12:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:33:01.578 12:47:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:01.835 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:33:01.835 { 00:33:01.835 "uuid": "26afdfe1-b02f-4cb3-88f3-a7f3fb23d696", 00:33:01.835 "name": "lvs_0", 00:33:01.835 "base_bdev": "Nvme0n1", 00:33:01.835 "total_data_clusters": 930, 00:33:01.835 "free_clusters": 930, 00:33:01.835 "block_size": 512, 00:33:01.835 "cluster_size": 1073741824 00:33:01.835 } 00:33:01.835 ]' 00:33:01.835 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="26afdfe1-b02f-4cb3-88f3-a7f3fb23d696") .free_clusters' 00:33:02.093 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=930 00:33:02.093 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="26afdfe1-b02f-4cb3-88f3-a7f3fb23d696") .cluster_size' 00:33:02.093 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:33:02.093 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=952320 00:33:02.093 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 952320 00:33:02.093 952320 00:33:02.093 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:33:02.350 58e652de-b996-4daa-a5f7-22bafb573594 00:33:02.350 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:02.607 12:47:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:03.172 12:47:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:03.172 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.172 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.172 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:33:03.173 12:47:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:03.173 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:03.430 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:03.430 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:03.430 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:03.430 12:47:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.430 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:03.430 fio-3.35 00:33:03.430 Starting 1 thread 00:33:05.954 00:33:05.954 test: (groupid=0, jobs=1): err= 0: pid=768469: Tue Nov 5 12:47:34 2024 00:33:05.954 read: IOPS=5963, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2008msec) 00:33:05.954 slat (nsec): min=1917, max=167305, avg=2471.62, stdev=2330.23 00:33:05.954 clat (usec): min=849, max=171359, avg=11712.83, stdev=11688.42 00:33:05.954 lat 
(usec): min=852, max=171422, avg=11715.30, stdev=11688.82 00:33:05.954 clat percentiles (msec): 00:33:05.954 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:33:05.954 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:33:05.954 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:33:05.954 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:33:05.954 | 99.99th=[ 171] 00:33:05.954 bw ( KiB/s): min=16488, max=26328, per=99.72%, avg=23786.00, stdev=4866.00, samples=4 00:33:05.954 iops : min= 4122, max= 6582, avg=5946.50, stdev=1216.50, samples=4 00:33:05.954 write: IOPS=5950, BW=23.2MiB/s (24.4MB/s)(46.7MiB/2008msec); 0 zone resets 00:33:05.954 slat (usec): min=2, max=140, avg= 2.56, stdev= 1.68 00:33:05.954 clat (usec): min=332, max=169141, avg=9575.28, stdev=10955.41 00:33:05.954 lat (usec): min=336, max=169150, avg=9577.84, stdev=10955.77 00:33:05.954 clat percentiles (msec): 00:33:05.954 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:33:05.954 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:33:05.954 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:33:05.954 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:33:05.954 | 99.99th=[ 169] 00:33:05.954 bw ( KiB/s): min=17512, max=26144, per=100.00%, avg=23802.00, stdev=4197.92, samples=4 00:33:05.954 iops : min= 4378, max= 6536, avg=5950.50, stdev=1049.48, samples=4 00:33:05.954 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:33:05.954 lat (msec) : 2=0.03%, 4=0.13%, 10=55.98%, 20=43.31%, 250=0.54% 00:33:05.954 cpu : usr=60.34%, sys=38.22%, ctx=134, majf=0, minf=36 00:33:05.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:33:05.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.954 issued rwts: total=11974,11948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.954 
latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.954 00:33:05.954 Run status group 0 (all jobs): 00:33:05.954 READ: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.0MB), run=2008-2008msec 00:33:05.954 WRITE: bw=23.2MiB/s (24.4MB/s), 23.2MiB/s-23.2MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (48.9MB), run=2008-2008msec 00:33:05.954 12:47:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:06.212 12:47:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=21bdc84c-9d61-43b5-b6bc-f15918447c58 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 21bdc84c-9d61-43b5-b6bc-f15918447c58 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=21bdc84c-9d61-43b5-b6bc-f15918447c58 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:33:07.628 { 00:33:07.628 "uuid": "26afdfe1-b02f-4cb3-88f3-a7f3fb23d696", 00:33:07.628 "name": "lvs_0", 00:33:07.628 "base_bdev": "Nvme0n1", 00:33:07.628 "total_data_clusters": 930, 00:33:07.628 "free_clusters": 0, 
00:33:07.628 "block_size": 512, 00:33:07.628 "cluster_size": 1073741824 00:33:07.628 }, 00:33:07.628 { 00:33:07.628 "uuid": "21bdc84c-9d61-43b5-b6bc-f15918447c58", 00:33:07.628 "name": "lvs_n_0", 00:33:07.628 "base_bdev": "58e652de-b996-4daa-a5f7-22bafb573594", 00:33:07.628 "total_data_clusters": 237847, 00:33:07.628 "free_clusters": 237847, 00:33:07.628 "block_size": 512, 00:33:07.628 "cluster_size": 4194304 00:33:07.628 } 00:33:07.628 ]' 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="21bdc84c-9d61-43b5-b6bc-f15918447c58") .free_clusters' 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=237847 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="21bdc84c-9d61-43b5-b6bc-f15918447c58") .cluster_size' 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=951388 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 951388 00:33:07.628 951388 00:33:07.628 12:47:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:08.560 9b407473-5c56-4272-a34a-41991ecf00ce 00:33:08.560 12:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:08.818 12:47:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:09.077 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print 
$3}' 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:09.335 12:47:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:09.593 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:09.593 fio-3.35 00:33:09.593 Starting 1 thread 00:33:12.118 00:33:12.118 test: (groupid=0, jobs=1): err= 0: pid=769324: Tue Nov 5 12:47:40 2024 00:33:12.118 read: IOPS=5784, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2009msec) 00:33:12.118 slat (nsec): min=1944, max=245437, avg=2578.61, stdev=3337.30 00:33:12.118 clat (usec): min=4857, max=20245, avg=12031.87, stdev=1133.69 00:33:12.118 lat (usec): min=4875, max=20248, avg=12034.45, stdev=1133.49 00:33:12.118 clat percentiles 
(usec): 00:33:12.118 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:33:12.118 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:33:12.118 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13829], 00:33:12.118 | 99.00th=[14484], 99.50th=[14746], 99.90th=[18220], 99.95th=[19530], 00:33:12.118 | 99.99th=[20317] 00:33:12.118 bw ( KiB/s): min=21788, max=23720, per=99.72%, avg=23075.00, stdev=872.86, samples=4 00:33:12.118 iops : min= 5447, max= 5930, avg=5768.75, stdev=218.22, samples=4 00:33:12.118 write: IOPS=5764, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec); 0 zone resets 00:33:12.118 slat (usec): min=2, max=218, avg= 2.65, stdev= 2.36 00:33:12.118 clat (usec): min=2473, max=18055, avg=9930.67, stdev=906.94 00:33:12.118 lat (usec): min=2485, max=18057, avg=9933.32, stdev=906.82 00:33:12.118 clat percentiles (usec): 00:33:12.118 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:33:12.118 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:33:12.118 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:33:12.118 | 99.00th=[11863], 99.50th=[12125], 99.90th=[15008], 99.95th=[17695], 00:33:12.118 | 99.99th=[17957] 00:33:12.118 bw ( KiB/s): min=22810, max=23216, per=99.98%, avg=23052.50, stdev=174.40, samples=4 00:33:12.118 iops : min= 5702, max= 5804, avg=5763.00, stdev=43.83, samples=4 00:33:12.118 lat (msec) : 4=0.05%, 10=27.94%, 20=71.99%, 50=0.02% 00:33:12.118 cpu : usr=60.81%, sys=37.85%, ctx=106, majf=0, minf=36 00:33:12.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:33:12.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:12.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:12.118 issued rwts: total=11622,11580,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:12.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:12.118 00:33:12.118 Run status 
group 0 (all jobs): 00:33:12.118 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2009-2009msec 00:33:12.118 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:33:12.118 12:47:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:12.118 12:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:12.118 12:47:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:16.297 12:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:16.297 12:47:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:19.575 12:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:19.575 12:47:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.481 rmmod nvme_tcp 00:33:21.481 rmmod nvme_fabrics 00:33:21.481 rmmod nvme_keyring 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 766444 ']' 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 766444 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 766444 ']' 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 766444 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 766444 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 766444' 00:33:21.481 killing process with pid 766444 00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 766444 
00:33:21.481 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 766444 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.740 12:47:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.277 00:33:24.277 real 0m38.164s 00:33:24.277 user 2m26.607s 00:33:24.277 sys 0m7.186s 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.277 ************************************ 00:33:24.277 END TEST nvmf_fio_host 00:33:24.277 ************************************ 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # 
run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.277 ************************************ 00:33:24.277 START TEST nvmf_failover 00:33:24.277 ************************************ 00:33:24.277 12:47:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:24.277 * Looking for test storage... 00:33:24.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 
00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:24.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.277 --rc genhtml_branch_coverage=1 00:33:24.277 --rc genhtml_function_coverage=1 00:33:24.277 --rc genhtml_legend=1 00:33:24.277 --rc geninfo_all_blocks=1 00:33:24.277 --rc geninfo_unexecuted_blocks=1 00:33:24.277 00:33:24.277 ' 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:24.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.277 --rc genhtml_branch_coverage=1 00:33:24.277 --rc genhtml_function_coverage=1 00:33:24.277 --rc genhtml_legend=1 00:33:24.277 --rc geninfo_all_blocks=1 00:33:24.277 --rc geninfo_unexecuted_blocks=1 00:33:24.277 00:33:24.277 ' 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:24.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.277 --rc genhtml_branch_coverage=1 00:33:24.277 --rc genhtml_function_coverage=1 00:33:24.277 --rc genhtml_legend=1 00:33:24.277 --rc geninfo_all_blocks=1 00:33:24.277 --rc geninfo_unexecuted_blocks=1 00:33:24.277 00:33:24.277 ' 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:24.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:24.277 --rc genhtml_branch_coverage=1 00:33:24.277 --rc genhtml_function_coverage=1 00:33:24.277 --rc genhtml_legend=1 00:33:24.277 --rc geninfo_all_blocks=1 00:33:24.277 --rc geninfo_unexecuted_blocks=1 00:33:24.277 00:33:24.277 ' 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.277 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.278 12:47:53 
nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:24.278 12:47:53 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:24.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.278 12:47:53 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:24.278 12:47:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:26.180 12:47:55 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:26.180 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:26.180 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.180 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:26.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:33:26.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # 
ip -4 addr flush cvl_0_0 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.181 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:33:26.439 00:33:26.439 --- 10.0.0.2 ping statistics --- 00:33:26.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.439 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:26.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:33:26.439 00:33:26.439 --- 10.0.0.1 ping statistics --- 00:33:26.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.439 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=772582 00:33:26.439 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:26.440 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 772582 00:33:26.440 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 772582 ']' 00:33:26.440 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.440 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:26.440 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.440 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:26.440 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:26.440 [2024-11-05 12:47:55.513808] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:33:26.440 [2024-11-05 12:47:55.513913] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.440 [2024-11-05 12:47:55.588781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:26.440 [2024-11-05 12:47:55.636085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.440 [2024-11-05 12:47:55.636137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.440 [2024-11-05 12:47:55.636166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.440 [2024-11-05 12:47:55.636178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:26.440 [2024-11-05 12:47:55.636188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.440 [2024-11-05 12:47:55.637635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.440 [2024-11-05 12:47:55.637699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:26.440 [2024-11-05 12:47:55.637703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.697 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:26.697 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:33:26.697 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.697 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:26.697 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:26.697 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.697 12:47:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:26.954 [2024-11-05 12:47:56.026986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.954 12:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:27.211 Malloc0 00:33:27.211 12:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:27.469 12:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:27.727 12:47:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.992 [2024-11-05 12:47:57.146959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.992 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:28.255 [2024-11-05 12:47:57.411763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:28.255 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:28.513 [2024-11-05 12:47:57.680615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=772875 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 772875 /var/tmp/bdevperf.sock 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- 
# '[' -z 772875 ']' 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:28.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:28.513 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:28.771 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:28.771 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:33:28.771 12:47:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:29.335 NVMe0n1 00:33:29.335 12:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:29.592 00:33:29.592 12:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=773007 00:33:29.592 12:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:29.592 12:47:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
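The `waitforlisten` helper seen above (with its `local max_retries=100`) polls until the target's UNIX-domain RPC socket appears before any `rpc.py` call is issued. A minimal standalone sketch of that retry pattern follows; the socket path, delay, and helper name here are illustrative stand-ins, not the harness's actual implementation:

```shell
# Sketch of a waitforlisten-style poll loop: retry until a path exists,
# giving up after max_retries attempts. Path/timings are hypothetical.
wait_for_file() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timeout waiting for $path" >&2
            return 1
        fi
        sleep 0.1
    done
    echo "found $path after $i retries"
}

# Simulate an app that creates its socket shortly after launch.
tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/spdk.sock" ) &
wait_for_file "$tmp/spdk.sock"
```

The real helper additionally probes the socket with an RPC rather than just checking for its existence, which is why the trace shows `rpc_addr=/var/tmp/bdevperf.sock` being tracked alongside the retry counter.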
00:33:30.524 12:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-05 12:47:59.913759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2472800 is same with the state(6) to be set 00:33:30.782 12:47:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:34.060 12:48:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:34.317 00:33:34.317 12:48:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:34.574 [2024-11-05 12:48:03.721844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2473650 is same with the state(6) to be set 00:33:34.575 12:48:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:37.849 12:48:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-05 12:48:07.050824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.849 12:48:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:39.223 12:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:39.223 12:48:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 773007 00:33:45.799 { 00:33:45.799 "results": [ 00:33:45.799 { 00:33:45.799 "job": "NVMe0n1", 00:33:45.799 "core_mask": "0x1", 00:33:45.799 "workload": "verify", 00:33:45.799 "status": "finished", 00:33:45.799 "verify_range": { 00:33:45.799 "start": 0, 00:33:45.799 "length": 16384 00:33:45.799 }, 00:33:45.799 "queue_depth": 128, 00:33:45.799 "io_size": 4096, 00:33:45.799 "runtime": 15.007022, 00:33:45.799 "iops": 8529.340464750434, 00:33:45.799 "mibps": 33.31773619043138, 00:33:45.799 "io_failed": 8468, 00:33:45.799 "io_timeout": 0, 00:33:45.799 "avg_latency_us": 14048.03226960818, 00:33:45.799 "min_latency_us": 564.3377777777778, 00:33:45.799 
"max_latency_us": 17670.447407407406 00:33:45.799 } 00:33:45.799 ], 00:33:45.799 "core_count": 1 00:33:45.800 } 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 772875 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 772875 ']' 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 772875 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 772875 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 772875' 00:33:45.800 killing process with pid 772875 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 772875 00:33:45.800 12:48:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 772875 00:33:45.800 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:45.800 [2024-11-05 12:47:57.748781] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:33:45.800 [2024-11-05 12:47:57.748888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772875 ] 00:33:45.800 [2024-11-05 12:47:57.818459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.800 [2024-11-05 12:47:57.867219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.800 Running I/O for 15 seconds... 00:33:45.800 8610.00 IOPS, 33.63 MiB/s [2024-11-05T11:48:15.038Z] [2024-11-05 12:47:59.915033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915365] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:109 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.800 [2024-11-05 12:47:59.915706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:45.800 [2024-11-05 12:47:59.915721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.915971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.800 [2024-11-05 12:47:59.915986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.800 [2024-11-05 12:47:59.916001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 
12:47:59.916269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916425] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.916973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.916991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.917007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.917022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.917037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.917051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.917067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.917082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.917097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.917111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.917126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 
[2024-11-05 12:47:59.917142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.917157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.917186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.801 [2024-11-05 12:47:59.917208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.801 [2024-11-05 12:47:59.917222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917323] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.802 [2024-11-05 12:47:59.917650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.802 [2024-11-05 12:47:59.917664] nvme_qpair.c: 
00:33:45.802 [2024-11-05 12:47:59.917679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:45.802 [2024-11-05 12:47:59.917692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE command prints on qid:1 (lba 80192-81024), each completed as ABORTED - SQ DELETION (00/08), elided ...]
00:33:45.803 [2024-11-05 12:47:59.919016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:45.803 [2024-11-05 12:47:59.919033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0
00:33:45.803 [2024-11-05 12:47:59.919048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:45.803 [2024-11-05 12:47:59.919068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:45.803 [2024-11-05 12:47:59.919086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:45.803 [2024-11-05 12:47:59.919098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0
00:33:45.803 [2024-11-05 12:47:59.919111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:45.803 [2024-11-05 12:47:59.919197] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four ASYNC EVENT REQUEST (0c) admin commands on qid:0 (cid:0-3), each completed as ABORTED - SQ DELETION (00/08), elided ...]
00:33:45.803 [2024-11-05 12:47:59.919371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:33:45.803 [2024-11-05 12:47:59.922644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:33:45.803 [2024-11-05 12:47:59.922681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6890 (9): Bad file descriptor
00:33:45.803 [2024-11-05 12:47:59.945207] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
8459.50 IOPS, 33.04 MiB/s [2024-11-05T11:48:15.041Z]
8553.67 IOPS, 33.41 MiB/s [2024-11-05T11:48:15.041Z]
8557.75 IOPS, 33.43 MiB/s [2024-11-05T11:48:15.041Z]
00:33:45.803 [2024-11-05 12:48:03.724448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.803 [2024-11-05 12:48:03.724493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE command prints on qid:1 (lba 86096 onward), each completed as ABORTED - SQ DELETION (00/08), elided ...]
[2024-11-05 12:48:03.726593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726762] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.726979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.726995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.727009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.727025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.727040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.727058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.727072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.727088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.727102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.805 [2024-11-05 12:48:03.727117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.805 [2024-11-05 12:48:03.727132] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-11-05 12:48:03.727317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-11-05 12:48:03.727351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-11-05 12:48:03.727380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-11-05 12:48:03.727410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-11-05 12:48:03.727439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-11-05 12:48:03.727468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:45.806 [2024-11-05 12:48:03.727484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.806 [2024-11-05 12:48:03.727497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.727978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.727998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 
12:48:03.728028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728195] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.806 [2024-11-05 12:48:03.728342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.806 [2024-11-05 12:48:03.728357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:45.807 [2024-11-05 12:48:03.728386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:45.807 [2024-11-05 12:48:03.728436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87080 len:8 PRP1 0x0 PRP2 0x0 00:33:45.807 [2024-11-05 12:48:03.728450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:45.807 [2024-11-05 12:48:03.728481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:45.807 [2024-11-05 12:48:03.728492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87088 len:8 PRP1 0x0 PRP2 0x0 00:33:45.807 [2024-11-05 12:48:03.728506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:45.807 [2024-11-05 12:48:03.728531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:45.807 [2024-11-05 12:48:03.728542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87096 len:8 PRP1 0x0 PRP2 0x0 00:33:45.807 [2024-11-05 12:48:03.728559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:45.807 [2024-11-05 12:48:03.728585] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:45.807 [2024-11-05 12:48:03.728596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87104 len:8 PRP1 0x0 PRP2 0x0 00:33:45.807 [2024-11-05 12:48:03.728615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728684] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:45.807 [2024-11-05 12:48:03.728723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.807 [2024-11-05 12:48:03.728742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.807 [2024-11-05 12:48:03.728771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.807 [2024-11-05 12:48:03.728798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:03.728813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.807 [2024-11-05 12:48:03.728828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 
12:48:03.728842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:45.807 [2024-11-05 12:48:03.732104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:45.807 [2024-11-05 12:48:03.732146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6890 (9): Bad file descriptor 00:33:45.807 8490.00 IOPS, 33.16 MiB/s [2024-11-05T11:48:15.045Z] [2024-11-05 12:48:03.884440] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:33:45.807 8362.17 IOPS, 32.66 MiB/s [2024-11-05T11:48:15.045Z] 8405.57 IOPS, 32.83 MiB/s [2024-11-05T11:48:15.045Z] 8439.88 IOPS, 32.97 MiB/s [2024-11-05T11:48:15.045Z] 8459.44 IOPS, 33.04 MiB/s [2024-11-05T11:48:15.045Z] [2024-11-05 12:48:08.350676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.350745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.350775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.350792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.350825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.350839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.350889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:77 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.350917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.350951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.350966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.350981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.350995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:45.807 [2024-11-05 12:48:08.351101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.807 [2024-11-05 12:48:08.351558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.807 [2024-11-05 12:48:08.351571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 
12:48:08.351613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.351984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.351998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.808 [2024-11-05 12:48:08.352309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 
[2024-11-05 12:48:08.352487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.808 [2024-11-05 12:48:08.352716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.808 [2024-11-05 12:48:08.352731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.352744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.352759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.809 [2024-11-05 12:48:08.352772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.352787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.352801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.352816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.352830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.352881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.352898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.352930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.352945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.352961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.352975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.352991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 
[2024-11-05 12:48:08.353036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 
[2024-11-05 12:48:08.353576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.809 [2024-11-05 12:48:08.353923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.809 [2024-11-05 12:48:08.353938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.810 [2024-11-05 12:48:08.353955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.810 [2024-11-05 12:48:08.353970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... analogous READ/WRITE command and ABORTED - SQ DELETION completion notices repeated for lba:51568 through lba:51760 ...] 00:33:45.810 [2024-11-05 12:48:08.354796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144eb50 is same with the state(6) to be set 00:33:45.810 [2024-11-05 12:48:08.354813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:45.810 [2024-11-05 12:48:08.354824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:33:45.810 [2024-11-05 12:48:08.354836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51768 len:8 PRP1 0x0 PRP2 0x0 00:33:45.810 [2024-11-05 12:48:08.354856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.810 [2024-11-05 12:48:08.354946] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:45.810 [2024-11-05 12:48:08.354989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.810 [2024-11-05 12:48:08.355009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.810 [2024-11-05 12:48:08.355025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.810 [2024-11-05 12:48:08.355044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.810 [2024-11-05 12:48:08.355058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.810 [2024-11-05 12:48:08.355072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.810 [2024-11-05 12:48:08.355091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:45.810 [2024-11-05 12:48:08.355104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.810 [2024-11-05 12:48:08.355118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:45.810 [2024-11-05 12:48:08.358486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:45.810 [2024-11-05 12:48:08.358524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f6890 (9): Bad file descriptor 00:33:45.810 [2024-11-05 12:48:08.384125] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:33:45.810 8442.50 IOPS, 32.98 MiB/s [2024-11-05T11:48:15.048Z] 8472.18 IOPS, 33.09 MiB/s [2024-11-05T11:48:15.048Z] 8476.42 IOPS, 33.11 MiB/s [2024-11-05T11:48:15.048Z] 8500.31 IOPS, 33.20 MiB/s [2024-11-05T11:48:15.048Z] 8516.00 IOPS, 33.27 MiB/s 00:33:45.810 Latency(us) 00:33:45.810 [2024-11-05T11:48:15.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.810 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:45.810 Verification LBA range: start 0x0 length 0x4000 00:33:45.810 NVMe0n1 : 15.01 8529.34 33.32 564.27 0.00 14048.03 564.34 17670.45 00:33:45.810 [2024-11-05T11:48:15.048Z] =================================================================================================================== 00:33:45.810 [2024-11-05T11:48:15.048Z] Total : 8529.34 33.32 564.27 0.00 14048.03 564.34 17670.45 00:33:45.810 Received shutdown signal, test time was about 15.000000 seconds 00:33:45.810 00:33:45.810 Latency(us) 00:33:45.810 [2024-11-05T11:48:15.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.810 [2024-11-05T11:48:15.049Z] =================================================================================================================== 00:33:45.811 [2024-11-05T11:48:15.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:45.811 
12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=775461 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 775461 /var/tmp/bdevperf.sock 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 775461 ']' 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:45.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:45.811 [2024-11-05 12:48:14.564887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:45.811 [2024-11-05 12:48:14.833620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:45.811 12:48:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:46.068 NVMe0n1 00:33:46.068 12:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:46.326 00:33:46.326 12:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:46.891 00:33:46.891 12:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:46.891 12:48:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:47.149 12:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:47.406 12:48:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:50.692 12:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:50.692 12:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:50.692 12:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=776125 00:33:50.692 12:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:50.692 12:48:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 776125 00:33:52.067 { 00:33:52.067 "results": [ 00:33:52.067 { 00:33:52.067 "job": "NVMe0n1", 00:33:52.067 "core_mask": "0x1", 00:33:52.067 "workload": "verify", 00:33:52.067 "status": "finished", 00:33:52.067 "verify_range": { 00:33:52.067 "start": 0, 00:33:52.067 "length": 16384 00:33:52.067 }, 00:33:52.067 "queue_depth": 128, 00:33:52.067 "io_size": 4096, 00:33:52.067 "runtime": 1.011073, 00:33:52.067 "iops": 8384.162172266493, 00:33:52.067 "mibps": 32.75063348541599, 00:33:52.067 "io_failed": 0, 00:33:52.067 "io_timeout": 0, 00:33:52.067 "avg_latency_us": 
15167.491632871515, 00:33:52.067 "min_latency_us": 1517.037037037037, 00:33:52.067 "max_latency_us": 15146.097777777777 00:33:52.067 } 00:33:52.067 ], 00:33:52.067 "core_count": 1 00:33:52.067 } 00:33:52.067 12:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:52.067 [2024-11-05 12:48:14.084597] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:33:52.067 [2024-11-05 12:48:14.084682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775461 ] 00:33:52.067 [2024-11-05 12:48:14.153597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.067 [2024-11-05 12:48:14.197691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.067 [2024-11-05 12:48:16.525931] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:52.067 [2024-11-05 12:48:16.526009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.067 [2024-11-05 12:48:16.526031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.067 [2024-11-05 12:48:16.526046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.067 [2024-11-05 12:48:16.526060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.067 [2024-11-05 12:48:16.526073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:33:52.067 [2024-11-05 12:48:16.526086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.067 [2024-11-05 12:48:16.526100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.067 [2024-11-05 12:48:16.526112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.067 [2024-11-05 12:48:16.526125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:52.067 [2024-11-05 12:48:16.526184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:52.067 [2024-11-05 12:48:16.526214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5b890 (9): Bad file descriptor 00:33:52.067 [2024-11-05 12:48:16.579301] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:52.067 Running I/O for 1 seconds... 
00:33:52.067 8293.00 IOPS, 32.39 MiB/s 00:33:52.067 Latency(us) 00:33:52.067 [2024-11-05T11:48:21.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.067 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:52.067 Verification LBA range: start 0x0 length 0x4000 00:33:52.067 NVMe0n1 : 1.01 8384.16 32.75 0.00 0.00 15167.49 1517.04 15146.10 00:33:52.067 [2024-11-05T11:48:21.305Z] =================================================================================================================== 00:33:52.067 [2024-11-05T11:48:21.305Z] Total : 8384.16 32.75 0.00 0.00 15167.49 1517.04 15146.10 00:33:52.067 12:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:52.067 12:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:52.067 12:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:52.325 12:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:52.325 12:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:52.582 12:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:52.840 12:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:56.124 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:56.124 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 775461 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 775461 ']' 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 775461 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 775461 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 775461' 00:33:56.383 killing process with pid 775461 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 775461 00:33:56.383 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 775461 00:33:56.642 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:56.642 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:56.902 rmmod nvme_tcp 00:33:56.902 rmmod nvme_fabrics 00:33:56.902 rmmod nvme_keyring 00:33:56.902 12:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 772582 ']' 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 772582 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 772582 ']' 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 772582 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 772582 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 772582' 00:33:56.902 killing process with pid 772582 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 772582 00:33:56.902 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 772582 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.162 12:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:59.698 00:33:59.698 real 0m35.354s 00:33:59.698 user 2m4.629s 00:33:59.698 sys 
0m5.850s 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:59.698 ************************************ 00:33:59.698 END TEST nvmf_failover 00:33:59.698 ************************************ 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.698 ************************************ 00:33:59.698 START TEST nvmf_host_discovery 00:33:59.698 ************************************ 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:59.698 * Looking for test storage... 
00:33:59.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:59.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.698 --rc genhtml_branch_coverage=1 00:33:59.698 --rc genhtml_function_coverage=1 00:33:59.698 --rc 
genhtml_legend=1 00:33:59.698 --rc geninfo_all_blocks=1 00:33:59.698 --rc geninfo_unexecuted_blocks=1 00:33:59.698 00:33:59.698 ' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:59.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.698 --rc genhtml_branch_coverage=1 00:33:59.698 --rc genhtml_function_coverage=1 00:33:59.698 --rc genhtml_legend=1 00:33:59.698 --rc geninfo_all_blocks=1 00:33:59.698 --rc geninfo_unexecuted_blocks=1 00:33:59.698 00:33:59.698 ' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:59.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.698 --rc genhtml_branch_coverage=1 00:33:59.698 --rc genhtml_function_coverage=1 00:33:59.698 --rc genhtml_legend=1 00:33:59.698 --rc geninfo_all_blocks=1 00:33:59.698 --rc geninfo_unexecuted_blocks=1 00:33:59.698 00:33:59.698 ' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:59.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:59.698 --rc genhtml_branch_coverage=1 00:33:59.698 --rc genhtml_function_coverage=1 00:33:59.698 --rc genhtml_legend=1 00:33:59.698 --rc geninfo_all_blocks=1 00:33:59.698 --rc geninfo_unexecuted_blocks=1 00:33:59.698 00:33:59.698 ' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.698 12:48:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.698 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.699 12:48:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.699 12:48:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:59.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:59.699 12:48:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.603 
12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.603 12:48:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:01.603 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.603 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:01.604 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:01.604 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:01.604 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:01.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:34:01.604 00:34:01.604 --- 10.0.0.2 ping statistics --- 00:34:01.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.604 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:01.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:34:01.604 00:34:01.604 --- 10.0.0.1 ping statistics --- 00:34:01.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.604 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.604 
12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=778739 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 778739 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 778739 ']' 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:01.604 12:48:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.604 [2024-11-05 12:48:30.810889] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:34:01.604 [2024-11-05 12:48:30.810961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.863 [2024-11-05 12:48:30.887874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.863 [2024-11-05 12:48:30.934185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.863 [2024-11-05 12:48:30.934252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.863 [2024-11-05 12:48:30.934281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.863 [2024-11-05 12:48:30.934293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.863 [2024-11-05 12:48:30.934303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:01.863 [2024-11-05 12:48:30.934939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.863 [2024-11-05 12:48:31.073293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.863 [2024-11-05 12:48:31.081494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:01.863 12:48:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.863 null0 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:01.863 null1 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.863 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=778881 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 778881 /tmp/host.sock 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 778881 ']' 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:02.123 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:02.123 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.123 [2024-11-05 12:48:31.154031] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:34:02.123 [2024-11-05 12:48:31.154114] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778881 ] 00:34:02.123 [2024-11-05 12:48:31.218838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.123 [2024-11-05 12:48:31.264316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.381 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:02.381 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:34:02.381 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:02.382 12:48:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:02.382 12:48:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:02.382 12:48:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:02.382 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.642 [2024-11-05 12:48:31.651046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:34:02.642 12:48:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:34:03.208 [2024-11-05 12:48:32.430415] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:03.208 [2024-11-05 12:48:32.430439] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:03.208 [2024-11-05 12:48:32.430461] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:03.466 [2024-11-05 12:48:32.516752] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:03.724 [2024-11-05 12:48:32.733015] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:03.724 [2024-11-05 12:48:32.733908] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x21b6740:1 started. 00:34:03.724 [2024-11-05 12:48:32.735572] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:03.724 [2024-11-05 12:48:32.735592] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:03.724 [2024-11-05 12:48:32.739228] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21b6740 was disconnected and freed. delete nvme_qpair. 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:03.724 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.725 12:48:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.725 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:34:03.985 
12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:03.985 12:48:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:03.985 [2024-11-05 12:48:32.995647] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x21b6940:1 started. 00:34:03.985 [2024-11-05 12:48:32.999786] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x21b6940 was disconnected and freed. delete nvme_qpair. 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:03.985 12:48:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 [2024-11-05 12:48:33.071356] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:03.985 [2024-11-05 12:48:33.072323] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:03.985 [2024-11-05 12:48:33.072367] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:03.985 [2024-11-05 12:48:33.158050] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:03.985 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:03.986 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:03.986 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:34:03.986 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:03.986 12:48:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:34:04.272 [2024-11-05 12:48:33.257138] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:04.272 [2024-11-05 12:48:33.257205] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:04.272 [2024-11-05 12:48:33.257221] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:04.272 [2024-11-05 12:48:33.257229] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:05.263 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:05.263 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:05.263 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.264 [2024-11-05 12:48:34.287075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.264 [2024-11-05 12:48:34.287125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.264 [2024-11-05 12:48:34.287144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.264 [2024-11-05 12:48:34.287158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.264 [2024-11-05 12:48:34.287171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.264 [2024-11-05 12:48:34.287184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.264 [2024-11-05 12:48:34.287198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:05.264 [2024-11-05 12:48:34.287211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:05.264 [2024-11-05 12:48:34.287224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.264 [2024-11-05 12:48:34.287306] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:05.264 [2024-11-05 12:48:34.287336] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:34:05.264 12:48:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:05.264 [2024-11-05 12:48:34.297054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.264 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.264 [2024-11-05 12:48:34.307096] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.264 [2024-11-05 12:48:34.307122] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.264 [2024-11-05 12:48:34.307148] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.264 [2024-11-05 12:48:34.307157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.264 [2024-11-05 12:48:34.307189] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
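The `get_subsystem_paths` check traced above pipes `bdev_nvme_get_controllers` output through `jq -r '.[].ctrlrs[].trid.trsvcid'`, `sort -n`, and `xargs` to produce a single space-separated list of listener ports. A standalone sketch of that pipeline against mocked RPC output — the JSON shape here is an assumption inferred from the jq filter in the trace, not the full RPC schema:

```shell
# Mock of what `rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0`
# might return once both listeners (4420 and 4421) are attached. The JSON
# shape is inferred from the jq filter '.[].ctrlrs[].trid.trsvcid' above.
mock_rpc_output='[{"name":"nvme0","ctrlrs":[{"trid":{"trsvcid":"4421"}},{"trid":{"trsvcid":"4420"}}]}]'

# Same pipeline as the traced get_subsystem_paths: extract every trsvcid,
# sort numerically, flatten to one space-separated line.
paths=$(echo "$mock_rpc_output" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
echo "$paths"   # -> 4420 4421
```

This is why the waitforcondition at discovery.sh@122 first sees `4420` (comparison fails) and only passes once the second path appears and the output becomes `4420 4421`.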
00:34:05.264 [2024-11-05 12:48:34.307381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.264 [2024-11-05 12:48:34.307410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.264 [2024-11-05 12:48:34.307427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.264 [2024-11-05 12:48:34.307450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.264 [2024-11-05 12:48:34.307483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.264 [2024-11-05 12:48:34.307500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.264 [2024-11-05 12:48:34.307515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.264 [2024-11-05 12:48:34.307527] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.264 [2024-11-05 12:48:34.307538] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.264 [2024-11-05 12:48:34.307547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:05.264 [2024-11-05 12:48:34.317222] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.264 [2024-11-05 12:48:34.317258] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:05.264 [2024-11-05 12:48:34.317267] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.264 [2024-11-05 12:48:34.317274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.264 [2024-11-05 12:48:34.317312] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:05.264 [2024-11-05 12:48:34.317502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.264 [2024-11-05 12:48:34.317530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.264 [2024-11-05 12:48:34.317547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.264 [2024-11-05 12:48:34.317568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.264 [2024-11-05 12:48:34.317600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.264 [2024-11-05 12:48:34.317618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.264 [2024-11-05 12:48:34.317632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.264 [2024-11-05 12:48:34.317643] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.264 [2024-11-05 12:48:34.317657] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.264 [2024-11-05 12:48:34.317665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:05.264 [2024-11-05 12:48:34.327346] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.264 [2024-11-05 12:48:34.327367] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.264 [2024-11-05 12:48:34.327375] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.264 [2024-11-05 12:48:34.327382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.264 [2024-11-05 12:48:34.327419] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:05.264 [2024-11-05 12:48:34.327529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.264 [2024-11-05 12:48:34.327571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.264 [2024-11-05 12:48:34.327588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.264 [2024-11-05 12:48:34.327609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.265 [2024-11-05 12:48:34.327641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.265 [2024-11-05 12:48:34.327658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.265 [2024-11-05 12:48:34.327671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.265 [2024-11-05 12:48:34.327683] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:05.265 [2024-11-05 12:48:34.327692] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.265 [2024-11-05 12:48:34.327700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:05.265 [2024-11-05 12:48:34.337454] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.265 [2024-11-05 12:48:34.337476] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.265 [2024-11-05 12:48:34.337485] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:34:05.265 [2024-11-05 12:48:34.337492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.265 [2024-11-05 12:48:34.337531] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:05.265 [2024-11-05 12:48:34.337657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.265 [2024-11-05 12:48:34.337685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.265 [2024-11-05 12:48:34.337710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.265 [2024-11-05 12:48:34.337733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.265 [2024-11-05 12:48:34.337766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.265 [2024-11-05 12:48:34.337783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.265 [2024-11-05 12:48:34.337796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.265 [2024-11-05 12:48:34.337807] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.265 [2024-11-05 12:48:34.337816] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.265 [2024-11-05 12:48:34.337823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
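The `waitforcondition` helper whose expansion dominates this trace (`local max=10`, `(( max-- ))`, `eval` of the condition string, `sleep 1` between attempts) can be reconstructed roughly as follows — a sketch inferred from the xtraced commands in autotest_common.sh, not the verbatim source:

```shell
# Reconstructed from the xtrace above: poll an arbitrary bash condition
# up to `max` times, sleeping 1s between attempts. Returns 0 as soon as
# the condition holds, 1 if the retry budget is exhausted.
waitforcondition() {
    local cond="$1"
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Example: the condition holds on the first attempt, so no sleep occurs.
waitforcondition '[[ "4420 4421" == "4420 4421" ]]' && echo "condition met"
```

In the run above the first evaluation at 12:48:33 failed (`4420` vs `4420 4421`), the helper slept one second, and the 12:48:34 retry succeeded and returned 0.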
00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:05.265 [2024-11-05 12:48:34.347565] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.265 [2024-11-05 12:48:34.347588] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.265 [2024-11-05 12:48:34.347596] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.265 [2024-11-05 12:48:34.347603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.265 [2024-11-05 12:48:34.347642] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:05.265 [2024-11-05 12:48:34.347810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.265 [2024-11-05 12:48:34.347838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.265 [2024-11-05 12:48:34.347855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.265 [2024-11-05 12:48:34.347888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.265 [2024-11-05 12:48:34.347922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.265 [2024-11-05 12:48:34.347940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.265 [2024-11-05 12:48:34.347954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.265 [2024-11-05 12:48:34.347965] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.265 [2024-11-05 12:48:34.347974] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.265 [2024-11-05 12:48:34.347981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:05.265 [2024-11-05 12:48:34.357676] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.265 [2024-11-05 12:48:34.357697] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:05.265 [2024-11-05 12:48:34.357706] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.265 [2024-11-05 12:48:34.357713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.265 [2024-11-05 12:48:34.357736] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:05.265 [2024-11-05 12:48:34.357949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.265 [2024-11-05 12:48:34.357977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.265 [2024-11-05 12:48:34.357993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.265 [2024-11-05 12:48:34.358014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.265 [2024-11-05 12:48:34.358046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.265 [2024-11-05 12:48:34.358063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.265 [2024-11-05 12:48:34.358077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.265 [2024-11-05 12:48:34.358088] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.265 [2024-11-05 12:48:34.358097] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.265 [2024-11-05 12:48:34.358104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
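The repeated `connect() failed, errno = 111` entries in this stretch are the initiator being actively refused while port 4420 is torn down by `nvmf_subsystem_remove_listener`: on Linux, errno 111 is ECONNREFUSED. A quick way to confirm the mapping (assumes `python3` is on the host, which is typical for these test rigs):

```shell
# errno 111 on Linux is ECONNREFUSED ("Connection refused"), consistent
# with the target having just closed its 10.0.0.2:4420 listener while
# bdev_nvme keeps retrying the reconnect every ~10ms.
python3 -c "import errno, os; print(errno.errorcode[111], '-', os.strerror(111))"
# -> ECONNREFUSED - Connection refused
```

Each retry cycle in the log follows the same sequence: delete qpairs, disconnect ctrlr, attempt reconnect to 4420, get ECONNREFUSED, mark the ctrlr failed, clear pending resets, and schedule the next attempt.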
00:34:05.265 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.265 [2024-11-05 12:48:34.367770] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.265 [2024-11-05 12:48:34.367790] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.265 [2024-11-05 12:48:34.367799] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.265 [2024-11-05 12:48:34.367806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.265 [2024-11-05 12:48:34.367829] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:05.265 [2024-11-05 12:48:34.367956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.265 [2024-11-05 12:48:34.367984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.265 [2024-11-05 12:48:34.368000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.265 [2024-11-05 12:48:34.368021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.265 [2024-11-05 12:48:34.368041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.265 [2024-11-05 12:48:34.368054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.265 [2024-11-05 12:48:34.368067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:05.265 [2024-11-05 12:48:34.368079] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.265 [2024-11-05 12:48:34.368088] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.265 [2024-11-05 12:48:34.368100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:05.265 [2024-11-05 12:48:34.377872] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.265 [2024-11-05 12:48:34.377896] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.265 [2024-11-05 12:48:34.377920] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.265 [2024-11-05 12:48:34.377928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.265 [2024-11-05 12:48:34.377955] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:05.265 [2024-11-05 12:48:34.378073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.265 [2024-11-05 12:48:34.378101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.265 [2024-11-05 12:48:34.378116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.265 [2024-11-05 12:48:34.378138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.265 [2024-11-05 12:48:34.378158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.265 [2024-11-05 12:48:34.378171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.266 [2024-11-05 12:48:34.378185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.266 [2024-11-05 12:48:34.378196] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.266 [2024-11-05 12:48:34.378206] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.266 [2024-11-05 12:48:34.378214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:05.266 [2024-11-05 12:48:34.387990] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:34:05.266 [2024-11-05 12:48:34.388018] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.266 [2024-11-05 12:48:34.388029] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.266 [2024-11-05 12:48:34.388037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.266 [2024-11-05 12:48:34.388062] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:05.266 [2024-11-05 12:48:34.388207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.266 [2024-11-05 12:48:34.388234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.266 [2024-11-05 12:48:34.388249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.266 [2024-11-05 12:48:34.388271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.266 [2024-11-05 12:48:34.388304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.266 [2024-11-05 12:48:34.388322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.266 [2024-11-05 12:48:34.388336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.266 [2024-11-05 12:48:34.388347] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.266 [2024-11-05 12:48:34.388356] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:34:05.266 [2024-11-05 12:48:34.388363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.266 [2024-11-05 12:48:34.398097] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.266 [2024-11-05 12:48:34.398120] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:05.266 [2024-11-05 12:48:34.398129] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.266 [2024-11-05 12:48:34.398137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.266 [2024-11-05 12:48:34.398176] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:05.266 [2024-11-05 12:48:34.398340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.266 [2024-11-05 12:48:34.398367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.266 [2024-11-05 12:48:34.398384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.266 [2024-11-05 12:48:34.398405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.266 [2024-11-05 12:48:34.398436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.266 [2024-11-05 12:48:34.398453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.266 [2024-11-05 12:48:34.398467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.266 [2024-11-05 12:48:34.398478] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.266 [2024-11-05 12:48:34.398487] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.266 [2024-11-05 12:48:34.398495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:05.266 [2024-11-05 12:48:34.408210] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:05.266 [2024-11-05 12:48:34.408231] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:05.266 [2024-11-05 12:48:34.408239] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:05.266 [2024-11-05 12:48:34.408246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:05.266 [2024-11-05 12:48:34.408284] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:05.266 [2024-11-05 12:48:34.408469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.266 [2024-11-05 12:48:34.408496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188900 with addr=10.0.0.2, port=4420 00:34:05.266 [2024-11-05 12:48:34.408512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2188900 is same with the state(6) to be set 00:34:05.266 [2024-11-05 12:48:34.408533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2188900 (9): Bad file descriptor 00:34:05.266 [2024-11-05 12:48:34.408566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:05.266 [2024-11-05 12:48:34.408584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:05.266 [2024-11-05 12:48:34.408598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:05.266 [2024-11-05 12:48:34.408609] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:05.266 [2024-11-05 12:48:34.408618] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:05.266 [2024-11-05 12:48:34.408626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:05.266 [2024-11-05 12:48:34.414581] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:05.266 [2024-11-05 12:48:34.414609] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:34:05.266 12:48:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:06.203 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:06.462 12:48:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.462 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.463 
12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:34:06.463 12:48:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.463 12:48:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.843 [2024-11-05 12:48:36.671704] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:07.843 [2024-11-05 12:48:36.671741] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:07.843 [2024-11-05 12:48:36.671763] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:07.843 [2024-11-05 12:48:36.759057] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:07.843 [2024-11-05 12:48:36.824693] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:07.843 [2024-11-05 12:48:36.825493] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2183f40:1 started. 00:34:07.843 [2024-11-05 12:48:36.827652] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:07.843 [2024-11-05 12:48:36.827690] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.843 [2024-11-05 12:48:36.830272] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2183f40 was disconnected and freed. delete nvme_qpair. 
00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.843 request: 00:34:07.843 { 00:34:07.843 "name": "nvme", 00:34:07.843 "trtype": "tcp", 00:34:07.843 "traddr": "10.0.0.2", 00:34:07.843 "adrfam": "ipv4", 00:34:07.843 "trsvcid": "8009", 00:34:07.843 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:07.843 "wait_for_attach": true, 00:34:07.843 "method": "bdev_nvme_start_discovery", 00:34:07.843 "req_id": 1 00:34:07.843 } 00:34:07.843 Got JSON-RPC error response 00:34:07.843 response: 00:34:07.843 { 00:34:07.843 "code": -17, 00:34:07.843 "message": "File exists" 00:34:07.843 } 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.843 request: 00:34:07.843 { 00:34:07.843 "name": "nvme_second", 00:34:07.843 "trtype": "tcp", 00:34:07.843 "traddr": "10.0.0.2", 00:34:07.843 "adrfam": "ipv4", 00:34:07.843 "trsvcid": "8009", 00:34:07.843 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:07.843 "wait_for_attach": true, 00:34:07.843 "method": "bdev_nvme_start_discovery", 00:34:07.843 "req_id": 1 00:34:07.843 } 00:34:07.843 Got JSON-RPC error response 00:34:07.843 response: 00:34:07.843 { 00:34:07.843 "code": -17, 00:34:07.843 "message": "File exists" 00:34:07.843 } 
00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq 
-r '.[].name' 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:07.843 12:48:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:07.843 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.843 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:07.843 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:07.843 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:34:07.843 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:07.843 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.843 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.844 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.844 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.844 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:07.844 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:07.844 12:48:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:09.222 [2024-11-05 12:48:38.043115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.222 [2024-11-05 12:48:38.043201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188250 with addr=10.0.0.2, port=8010 00:34:09.222 [2024-11-05 12:48:38.043248] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:09.222 [2024-11-05 12:48:38.043262] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:09.222 [2024-11-05 12:48:38.043275] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:10.157 [2024-11-05 12:48:39.045541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.157 [2024-11-05 12:48:39.045576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2188250 with addr=10.0.0.2, port=8010 00:34:10.157 [2024-11-05 12:48:39.045596] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:10.157 [2024-11-05 12:48:39.045609] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:10.157 [2024-11-05 12:48:39.045620] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:11.092 [2024-11-05 12:48:40.047758] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:11.092 request: 00:34:11.092 { 00:34:11.092 "name": "nvme_second", 00:34:11.092 "trtype": "tcp", 00:34:11.092 "traddr": "10.0.0.2", 00:34:11.092 "adrfam": "ipv4", 00:34:11.092 "trsvcid": "8010", 00:34:11.092 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:11.092 "wait_for_attach": false, 00:34:11.092 "attach_timeout_ms": 3000, 00:34:11.092 "method": "bdev_nvme_start_discovery", 00:34:11.092 "req_id": 1 
00:34:11.092 } 00:34:11.092 Got JSON-RPC error response 00:34:11.092 response: 00:34:11.092 { 00:34:11.092 "code": -110, 00:34:11.092 "message": "Connection timed out" 00:34:11.092 } 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 778881 00:34:11.092 12:48:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:11.092 rmmod nvme_tcp 00:34:11.092 rmmod nvme_fabrics 00:34:11.092 rmmod nvme_keyring 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 778739 ']' 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 778739 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 778739 ']' 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 778739 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 778739 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 778739' 00:34:11.092 killing process with pid 778739 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 778739 00:34:11.092 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 778739 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:11.353 12:48:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.257 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr 
flush cvl_0_1 00:34:13.257 00:34:13.257 real 0m14.093s 00:34:13.257 user 0m20.746s 00:34:13.257 sys 0m2.833s 00:34:13.257 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:13.257 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:13.257 ************************************ 00:34:13.257 END TEST nvmf_host_discovery 00:34:13.257 ************************************ 00:34:13.257 12:48:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:13.257 12:48:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:13.257 12:48:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:13.257 12:48:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.516 ************************************ 00:34:13.517 START TEST nvmf_host_multipath_status 00:34:13.517 ************************************ 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:13.517 * Looking for test storage... 
00:34:13.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:13.517 12:48:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.517 12:48:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:13.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.517 --rc genhtml_branch_coverage=1 00:34:13.517 --rc genhtml_function_coverage=1 00:34:13.517 --rc genhtml_legend=1 00:34:13.517 --rc geninfo_all_blocks=1 00:34:13.517 --rc geninfo_unexecuted_blocks=1 00:34:13.517 00:34:13.517 ' 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:13.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.517 --rc genhtml_branch_coverage=1 00:34:13.517 --rc genhtml_function_coverage=1 00:34:13.517 --rc genhtml_legend=1 00:34:13.517 --rc geninfo_all_blocks=1 00:34:13.517 --rc geninfo_unexecuted_blocks=1 00:34:13.517 00:34:13.517 ' 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:13.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.517 --rc genhtml_branch_coverage=1 00:34:13.517 --rc genhtml_function_coverage=1 00:34:13.517 --rc genhtml_legend=1 00:34:13.517 --rc geninfo_all_blocks=1 00:34:13.517 --rc geninfo_unexecuted_blocks=1 00:34:13.517 00:34:13.517 ' 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:13.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.517 --rc genhtml_branch_coverage=1 00:34:13.517 --rc genhtml_function_coverage=1 00:34:13.517 --rc genhtml_legend=1 00:34:13.517 --rc geninfo_all_blocks=1 00:34:13.517 --rc geninfo_unexecuted_blocks=1 00:34:13.517 00:34:13.517 ' 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:13.517 
12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.517 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:13.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:13.518 12:48:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:13.518 12:48:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:16.052 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.052 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:16.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:16.053 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:16.053 12:48:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:16.053 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:16.053 12:48:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:16.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:16.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:34:16.053 00:34:16.053 --- 10.0.0.2 ping statistics --- 00:34:16.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.053 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:16.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:16.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:34:16.053 00:34:16.053 --- 10.0.0.1 ping statistics --- 00:34:16.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:16.053 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:16.053 12:48:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=782057 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 782057 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 782057 ']' 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:16.053 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:16.053 [2024-11-05 12:48:45.064088] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:34:16.053 [2024-11-05 12:48:45.064184] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.053 [2024-11-05 12:48:45.140127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:16.053 [2024-11-05 12:48:45.185321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.053 [2024-11-05 12:48:45.185376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:16.053 [2024-11-05 12:48:45.185400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.053 [2024-11-05 12:48:45.185410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.053 [2024-11-05 12:48:45.185420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.053 [2024-11-05 12:48:45.189880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.053 [2024-11-05 12:48:45.189892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=782057 00:34:16.312 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:16.571 [2024-11-05 12:48:45.566111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.571 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:16.828 Malloc0 00:34:16.828 12:48:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:17.086 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:17.344 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.603 [2024-11-05 12:48:46.664220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:17.603 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:17.861 [2024-11-05 12:48:46.940961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=782339 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 782339 /var/tmp/bdevperf.sock 00:34:17.861 12:48:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 782339 ']' 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:17.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:17.861 12:48:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:18.119 12:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:18.119 12:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:34:18.119 12:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:18.377 12:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:18.944 Nvme0n1 00:34:18.944 12:48:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:19.202 Nvme0n1 00:34:19.202 12:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:19.202 12:48:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:21.744 12:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:21.744 12:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:21.744 12:48:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:22.003 12:48:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:22.936 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:22.936 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:22.936 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.936 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:23.194 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.194 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:23.194 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.194 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:23.452 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.452 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:23.452 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.452 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:23.711 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.711 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:23.711 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.711 12:48:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:23.969 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
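Every `port_status` check in the trace above reduces to the same `jq` filter over `bdev_nvme_get_io_paths` output: select the io_path entry for one listener port and read one boolean field. A minimal self-contained sketch of that filter, run against a hypothetical two-path sample (the JSON here is illustrative, containing only the fields the log's filter touches, not actual RPC output):

```shell
# Hypothetical sample of `bdev_nvme_get_io_paths` output with one poll
# group and two io_paths (ports 4420 and 4421), as in the test above.
paths='{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# Same filter shape as host/multipath_status.sh@64: pick the entry for
# one listener port and extract a single boolean field from it.
current_4420=$(echo "$paths" | jq -r \
  '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current')
echo "$current_4420"
```

The test then compares the extracted string against the expected literal (`[[ true == \t\r\u\e ]]` in the trace); swapping `"4420"` for `"4421"`, or `.current` for `.connected` / `.accessible`, yields the other five checks of each `check_status` round.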
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.969 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:23.969 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.969 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:24.228 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.228 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:24.228 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:24.228 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:24.486 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:24.486 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:24.486 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:25.052 12:48:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:25.052 12:48:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.425 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:26.683 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.683 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:26.683 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.683 12:48:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:26.941 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:26.941 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:26.941 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:26.941 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:27.199 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.199 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:27.199 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.199 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:27.457 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.457 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:27.457 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:27.457 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:27.715 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:27.715 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:27.715 12:48:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:28.281 12:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:28.281 12:48:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.653 12:48:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:29.912 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:29.912 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:29.912 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:29.912 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:30.170 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.170 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:30.170 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.170 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:30.428 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.428 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:30.428 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.428 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:30.685 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.685 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:30.685 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:30.685 12:48:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:30.942 12:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:30.942 12:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:30.942 12:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:31.506 12:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:31.506 12:49:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:32.913 12:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:32.913 12:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:32.913 12:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.913 12:49:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:32.913 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:32.913 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:32.913 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:32.913 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:33.196 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:33.196 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:33.196 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.196 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:33.454 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.454 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:33.454 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.454 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:33.712 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.712 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:33.712 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.712 12:49:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:33.970 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:33.970 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:33.970 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:33.970 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:34.228 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:34.228 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:34.228 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:34.486 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:34.743 12:49:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:36.113 12:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:36.113 12:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:36.113 12:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.113 12:49:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:36.113 12:49:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:36.113 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:36.113 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.113 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:36.371 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:36.371 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:36.371 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.371 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:36.628 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.629 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:36.629 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.629 12:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:36.886 
12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:36.886 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:36.886 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:36.886 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:37.144 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.144 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:37.144 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:37.144 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:37.401 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:37.401 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:37.401 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:37.659 12:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:37.917 12:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.289 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:39.547 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.547 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:39.547 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.547 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:39.805 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:39.805 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:39.805 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:39.805 12:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:40.064 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.064 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:40.064 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.064 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:40.323 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:40.323 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:40.323 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:40.323 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:40.596 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:40.596 12:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:40.857 12:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:40.857 12:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:41.116 12:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:41.681 12:49:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:42.616 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:42.616 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:42.616 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:42.616 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:42.874 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:42.874 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:42.874 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:42.874 12:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:43.132 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.132 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:43.132 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.132 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:43.390 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.390 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:43.390 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:43.390 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:43.649 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.649 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:43.649 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.649 12:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:43.907 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:43.907 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:43.907 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:43.907 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:44.165 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.165 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:44.165 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:44.424 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:44.682 12:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:45.617 12:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:45.617 12:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:45.617 12:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.617 12:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:46.184 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:46.184 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:46.184 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.184 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:46.184 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.184 12:49:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:46.184 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.184 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.443 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.443 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.443 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.443 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.010 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.010 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:47.010 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.010 12:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:47.010 12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.010 
12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:47.010 12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.010 12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:47.576 12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.576 12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:47.576 12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:47.576 12:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:47.835 12:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:49.210 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:49.210 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:49.210 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.210 12:49:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:49.210 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.210 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:49.210 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.210 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:49.469 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.469 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:49.469 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.469 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:49.727 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.727 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:49.727 12:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.727 12:49:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:49.985 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.985 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:49.985 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.985 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:50.244 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.244 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:50.244 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.244 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:50.502 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.502 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:50.502 12:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:51.068 12:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:51.068 12:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.442 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.700 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.700 12:49:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.700 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.700 12:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:52.958 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.958 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:52.958 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.958 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:53.216 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.216 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:53.216 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.216 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.474 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.474 
12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:53.474 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.474 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.732 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.732 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 782339 00:34:53.732 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 782339 ']' 00:34:53.732 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 782339 00:34:53.732 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:34:53.732 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:53.732 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 782339 00:34:53.993 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:34:53.993 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:34:53.993 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 782339' 00:34:53.993 killing process with pid 782339 00:34:53.993 12:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 782339 00:34:53.993 12:49:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 782339 00:34:53.993 { 00:34:53.993 "results": [ 00:34:53.993 { 00:34:53.993 "job": "Nvme0n1", 00:34:53.993 "core_mask": "0x4", 00:34:53.993 "workload": "verify", 00:34:53.993 "status": "terminated", 00:34:53.993 "verify_range": { 00:34:53.993 "start": 0, 00:34:53.993 "length": 16384 00:34:53.993 }, 00:34:53.993 "queue_depth": 128, 00:34:53.993 "io_size": 4096, 00:34:53.993 "runtime": 34.416974, 00:34:53.993 "iops": 8026.737039694425, 00:34:53.993 "mibps": 31.35444156130635, 00:34:53.993 "io_failed": 0, 00:34:53.993 "io_timeout": 0, 00:34:53.993 "avg_latency_us": 15919.267189552578, 00:34:53.993 "min_latency_us": 442.9748148148148, 00:34:53.993 "max_latency_us": 4026531.84 00:34:53.993 } 00:34:53.993 ], 00:34:53.993 "core_count": 1 00:34:53.993 } 00:34:53.993 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 782339 00:34:53.993 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:53.993 [2024-11-05 12:48:47.007004] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:34:53.993 [2024-11-05 12:48:47.007110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782339 ] 00:34:53.993 [2024-11-05 12:48:47.076077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.993 [2024-11-05 12:48:47.123510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.993 Running I/O for 90 seconds... 
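The trace above repeatedly exercises two helpers from `host/multipath_status.sh`: `port_status` (lines @64/@68-@73), which queries `bdev_nvme_get_io_paths` over the bdevperf RPC socket and checks one field of the io_path for a given port, and `set_ANA_state` (lines @59-@60), which flips the ANA state of both target listeners. A minimal sketch of that pattern, reconstructed from the logged command lines (the `rpc.py` path, socket path, NQN, and address are taken verbatim from the log and are environment-specific; `jq` is assumed to be installed, as in the trace):

```shell
#!/usr/bin/env bash
# Paths and identifiers as they appear in the trace (environment-specific).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# port_status <trsvcid> <field> <expected>
# Queries the host-side io_paths and checks one boolean field
# (current / connected / accessible) of the path on the given port.
port_status() {
    local port=$1 field=$2 expected=$3
    local status
    status=$("$rpc" -s "$sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [ "$status" = "$expected" ]
}

# set_ANA_state <state-for-4420> <state-for-4421>
# Changes the ANA state of both listeners on the target; the host then
# observes the change via ANA log pages, which the check_status calls
# in the trace verify after a short sleep.
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
```

Each `check_status a b c d e f` sequence in the trace is then six `port_status` calls in a row: `current`, `connected`, and `accessible` for ports 4420 and 4421, matching the expected values passed in (e.g. `check_status true false true true true false` after `set_ANA_state non_optimized inaccessible`).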
00:34:53.993 8479.00 IOPS, 33.12 MiB/s [2024-11-05T11:49:23.231Z] 8460.50 IOPS, 33.05 MiB/s [2024-11-05T11:49:23.231Z] 8561.33 IOPS, 33.44 MiB/s [2024-11-05T11:49:23.231Z] 8575.75 IOPS, 33.50 MiB/s [2024-11-05T11:49:23.231Z] 8614.20 IOPS, 33.65 MiB/s [2024-11-05T11:49:23.231Z] 8601.17 IOPS, 33.60 MiB/s [2024-11-05T11:49:23.231Z] 8585.29 IOPS, 33.54 MiB/s [2024-11-05T11:49:23.231Z] 8558.00 IOPS, 33.43 MiB/s [2024-11-05T11:49:23.231Z] 8540.56 IOPS, 33.36 MiB/s [2024-11-05T11:49:23.231Z] 8554.20 IOPS, 33.41 MiB/s [2024-11-05T11:49:23.231Z] 8572.36 IOPS, 33.49 MiB/s [2024-11-05T11:49:23.231Z] 8558.17 IOPS, 33.43 MiB/s [2024-11-05T11:49:23.231Z] 8573.46 IOPS, 33.49 MiB/s [2024-11-05T11:49:23.231Z] 8577.57 IOPS, 33.51 MiB/s [2024-11-05T11:49:23.231Z] 8580.67 IOPS, 33.52 MiB/s [2024-11-05T11:49:23.231Z] [2024-11-05 12:49:03.637987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.638111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.638171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.638212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 
nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.638264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.638300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.638337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.638375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.638391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.639646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.639685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.639728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.639757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.993 [2024-11-05 12:49:03.639781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.993 [2024-11-05 12:49:03.639798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.639824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.994 [2024-11-05 12:49:03.639841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.639872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.994 [2024-11-05 12:49:03.639890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.639918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.994 [2024-11-05 12:49:03.639934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.639983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:119400 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119488 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.994 [2024-11-05 12:49:03.640644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 
cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119568 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.640967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.640988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.994 [2024-11-05 12:49:03.641341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:53.994 [2024-11-05 12:49:03.641356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 
m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119744 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.641962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.641984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 
m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:53.995 [2024-11-05 12:49:03.642278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:34:53.995 [2024-11-05 12:49:03.642687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:53.995 [2024-11-05 12:49:03.642925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.642967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.642997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.643014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.643040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.643056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.643081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.643097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.995 [2024-11-05 12:49:03.643123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.995 [2024-11-05 12:49:03.643138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:34:53.995 [2024-11-05 12:49:03.643163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.996 [2024-11-05 12:49:03.643179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.996 [2024-11-05 12:49:03.643236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 
[2024-11-05 12:49:03.643407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.996 [2024-11-05 12:49:03.643620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.996 [2024-11-05 12:49:03.643635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.996 
[2024-11-05 12:49:03.643660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.643675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.643700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.643721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.643747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.643766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.643790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.643806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.643832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.643869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.643897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.643919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.643945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.643961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.643995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.996 [2024-11-05 12:49:03.644714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:53.996 [2024-11-05 12:49:03.644739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:03.644755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:03.644782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:03.644798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:03.644822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:03.644844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:03.644893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:03.644912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:03.644938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:03.644954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:03.644980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:03.644995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:03.645021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:03.645037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:34:53.997 8100.81 IOPS, 31.64 MiB/s
[2024-11-05T11:49:23.235Z] 7624.29 IOPS, 29.78 MiB/s
[2024-11-05T11:49:23.235Z] 7200.72 IOPS, 28.13 MiB/s
[2024-11-05T11:49:23.235Z] 6821.74 IOPS, 26.65 MiB/s
[2024-11-05T11:49:23.235Z] 6862.60 IOPS, 26.81 MiB/s
[2024-11-05T11:49:23.235Z] 6946.14 IOPS, 27.13 MiB/s
[2024-11-05T11:49:23.235Z] 7033.73 IOPS, 27.48 MiB/s
[2024-11-05T11:49:23.235Z] 7214.70 IOPS, 28.18 MiB/s
[2024-11-05T11:49:23.235Z] 7383.62 IOPS, 28.84 MiB/s
[2024-11-05T11:49:23.235Z] 7521.64 IOPS, 29.38 MiB/s
[2024-11-05T11:49:23.235Z] 7572.54 IOPS, 29.58 MiB/s
[2024-11-05T11:49:23.235Z] 7602.70 IOPS, 29.70 MiB/s
[2024-11-05T11:49:23.235Z] 7626.36 IOPS, 29.79 MiB/s
[2024-11-05T11:49:23.235Z] 7692.97 IOPS, 30.05 MiB/s
[2024-11-05T11:49:23.235Z] 7811.10 IOPS, 30.51 MiB/s
[2024-11-05T11:49:23.235Z] 7914.61 IOPS, 30.92 MiB/s
[2024-11-05T11:49:23.235Z] [2024-11-05 12:49:20.266711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.266791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.266879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.266917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.266944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.266961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.269678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.269723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.269760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.269799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.269836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.269898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.269939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.269976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.269997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.270485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.997 [2024-11-05 12:49:20.270531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:34:53.997 [2024-11-05 12:49:20.270594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.997 [2024-11-05 12:49:20.270612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:34:53.997 7986.38 IOPS, 31.20 MiB/s
[2024-11-05T11:49:23.235Z] 8005.52 IOPS, 31.27 MiB/s
[2024-11-05T11:49:23.235Z] 8025.12 IOPS, 31.35 MiB/s
[2024-11-05T11:49:23.235Z] Received shutdown signal, test time was about 34.417780 seconds
00:34:53.997
00:34:53.997 Latency(us)
00:34:53.997 [2024-11-05T11:49:23.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.997 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:53.997 Verification LBA range: start 0x0 length 0x4000
00:34:53.997 Nvme0n1 : 34.42 8026.74 31.35 0.00 0.00 15919.27 442.97 4026531.84
00:34:53.997 [2024-11-05T11:49:23.235Z] ===================================================================================================================
00:34:53.997 [2024-11-05T11:49:23.235Z] Total : 8026.74 31.35 0.00 0.00 15919.27 442.97 4026531.84
00:34:53.997 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:54.255 rmmod nvme_tcp
00:34:54.255 rmmod nvme_fabrics
00:34:54.255 rmmod nvme_keyring
00:34:54.255 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 782057 ']'
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 782057
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 782057 ']'
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 782057
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 782057
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 782057'
00:34:54.514 killing process with pid 782057
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 782057
00:34:54.514 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 782057
00:34:54.773 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:54.773 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:54.773 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:54.773 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:34:54.773 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:34:54.773 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:54.774 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:34:54.774 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:54.774 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:54.774 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:54.774 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:54.774 12:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:56.680 12:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:56.680
00:34:56.680 real 0m43.305s
00:34:56.680 user 2m10.801s
00:34:56.680 sys 0m11.239s
00:34:56.680 12:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:34:56.680 12:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:56.680 ************************************
00:34:56.680 END TEST nvmf_host_multipath_status
00:34:56.680 ************************************
00:34:56.680 12:49:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test
nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:56.680 12:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:56.680 12:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:56.680 12:49:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.680 ************************************ 00:34:56.680 START TEST nvmf_discovery_remove_ifc 00:34:56.680 ************************************ 00:34:56.680 12:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:56.939 * Looking for test storage... 00:34:56.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:56.939 12:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:56.939 12:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:34:56.939 12:49:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.939 12:49:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.939 --rc genhtml_branch_coverage=1 00:34:56.939 --rc genhtml_function_coverage=1 00:34:56.939 --rc genhtml_legend=1 00:34:56.939 --rc geninfo_all_blocks=1 
00:34:56.939 --rc geninfo_unexecuted_blocks=1 00:34:56.939 00:34:56.939 ' 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.939 --rc genhtml_branch_coverage=1 00:34:56.939 --rc genhtml_function_coverage=1 00:34:56.939 --rc genhtml_legend=1 00:34:56.939 --rc geninfo_all_blocks=1 00:34:56.939 --rc geninfo_unexecuted_blocks=1 00:34:56.939 00:34:56.939 ' 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:56.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.939 --rc genhtml_branch_coverage=1 00:34:56.939 --rc genhtml_function_coverage=1 00:34:56.939 --rc genhtml_legend=1 00:34:56.939 --rc geninfo_all_blocks=1 00:34:56.939 --rc geninfo_unexecuted_blocks=1 00:34:56.939 00:34:56.939 ' 00:34:56.939 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:56.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.940 --rc genhtml_branch_coverage=1 00:34:56.940 --rc genhtml_function_coverage=1 00:34:56.940 --rc genhtml_legend=1 00:34:56.940 --rc geninfo_all_blocks=1 00:34:56.940 --rc geninfo_unexecuted_blocks=1 00:34:56.940 00:34:56.940 ' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.940 
12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.940 
12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:56.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:56.940 12:49:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:56.940 12:49:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.473 12:49:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.473 12:49:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:59.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.473 12:49:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:59.473 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:59.473 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:59.473 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.473 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:34:59.474 00:34:59.474 --- 10.0.0.2 ping statistics --- 00:34:59.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.474 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:59.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:34:59.474 00:34:59.474 --- 10.0.0.1 ping statistics --- 00:34:59.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.474 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=788675 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 788675 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 788675 ']' 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:59.474 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.474 [2024-11-05 12:49:28.517031] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:34:59.474 [2024-11-05 12:49:28.517122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.474 [2024-11-05 12:49:28.588631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.474 [2024-11-05 12:49:28.632554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.474 [2024-11-05 12:49:28.632608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:59.474 [2024-11-05 12:49:28.632645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.474 [2024-11-05 12:49:28.632662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.474 [2024-11-05 12:49:28.632672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.474 [2024-11-05 12:49:28.633305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.732 [2024-11-05 12:49:28.780101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.732 [2024-11-05 12:49:28.788328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:59.732 null0 00:34:59.732 [2024-11-05 12:49:28.820271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=788742 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 788742 /tmp/host.sock 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 788742 ']' 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:59.732 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:59.732 12:49:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.732 [2024-11-05 12:49:28.889444] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:34:59.732 [2024-11-05 12:49:28.889522] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788742 ] 00:34:59.732 [2024-11-05 12:49:28.958492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.989 [2024-11-05 12:49:29.003983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:59.989 12:49:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:59.989 12:49:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.359 [2024-11-05 12:49:30.237294] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:01.359 [2024-11-05 12:49:30.237325] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:01.359 [2024-11-05 12:49:30.237345] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:01.359 [2024-11-05 12:49:30.365755] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:01.359 [2024-11-05 12:49:30.426459] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:01.359 [2024-11-05 12:49:30.427443] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x19a4370:1 started. 
00:35:01.359 [2024-11-05 12:49:30.429156] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:01.359 [2024-11-05 12:49:30.429237] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:01.360 [2024-11-05 12:49:30.429284] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:01.360 [2024-11-05 12:49:30.429305] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:01.360 [2024-11-05 12:49:30.429333] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.360 [2024-11-05 12:49:30.436108] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x19a4370 was disconnected and freed. delete nvme_qpair. 
00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:01.360 12:49:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:02.732 12:49:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:03.663 12:49:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:04.595 12:49:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:05.529 12:49:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:05.529 12:49:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:06.949 12:49:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:35:06.949 [2024-11-05 12:49:35.870675] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:06.949 [2024-11-05 12:49:35.870739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.949 [2024-11-05 12:49:35.870760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.949 [2024-11-05 12:49:35.870777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.949 [2024-11-05 12:49:35.870790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.949 [2024-11-05 12:49:35.870803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.949 [2024-11-05 12:49:35.870816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.949 [2024-11-05 12:49:35.870854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.950 [2024-11-05 12:49:35.870875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.950 [2024-11-05 12:49:35.870889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:06.950 [2024-11-05 12:49:35.870901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:06.950 [2024-11-05 12:49:35.870914] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980bc0 is same with the state(6) to be set 00:35:06.950 [2024-11-05 12:49:35.880695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980bc0 (9): Bad file descriptor 00:35:06.950 [2024-11-05 12:49:35.890740] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:06.950 [2024-11-05 12:49:35.890761] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:06.950 [2024-11-05 12:49:35.890771] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:06.950 [2024-11-05 12:49:35.890783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:06.950 [2024-11-05 12:49:35.890824] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:07.910 [2024-11-05 12:49:36.894915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:07.910 [2024-11-05 12:49:36.894971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1980bc0 with addr=10.0.0.2, port=4420 00:35:07.910 [2024-11-05 12:49:36.894994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1980bc0 is same with the state(6) to be set 00:35:07.910 [2024-11-05 12:49:36.895033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980bc0 (9): Bad file descriptor 00:35:07.910 [2024-11-05 12:49:36.895506] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:35:07.910 [2024-11-05 12:49:36.895547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:07.910 [2024-11-05 12:49:36.895564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:07.910 [2024-11-05 12:49:36.895580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:07.910 [2024-11-05 12:49:36.895593] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:07.910 [2024-11-05 12:49:36.895605] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:07.910 [2024-11-05 12:49:36.895613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:07.910 [2024-11-05 12:49:36.895634] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:07.910 [2024-11-05 12:49:36.895643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:07.910 12:49:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:08.843 [2024-11-05 12:49:37.898156] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:08.843 [2024-11-05 12:49:37.898215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:08.843 [2024-11-05 12:49:37.898245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:08.843 [2024-11-05 12:49:37.898259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:08.843 [2024-11-05 12:49:37.898272] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:08.843 [2024-11-05 12:49:37.898286] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:08.843 [2024-11-05 12:49:37.898296] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:08.843 [2024-11-05 12:49:37.898304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:08.843 [2024-11-05 12:49:37.898348] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:08.843 [2024-11-05 12:49:37.898411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.843 [2024-11-05 12:49:37.898434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.843 [2024-11-05 12:49:37.898453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.843 [2024-11-05 12:49:37.898466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.843 [2024-11-05 12:49:37.898481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:08.844 [2024-11-05 12:49:37.898495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.844 [2024-11-05 12:49:37.898508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.844 [2024-11-05 12:49:37.898521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.844 [2024-11-05 12:49:37.898535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:08.844 [2024-11-05 12:49:37.898547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:08.844 [2024-11-05 12:49:37.898560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:08.844 [2024-11-05 12:49:37.898611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19702d0 (9): Bad file descriptor 00:35:08.844 [2024-11-05 12:49:37.899599] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:08.844 [2024-11-05 12:49:37.899621] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:08.844 12:49:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:08.844 12:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:35:08.844 12:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:08.844 12:49:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:10.216 12:49:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:10.781 [2024-11-05 12:49:39.954423] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:10.781 [2024-11-05 12:49:39.954462] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:10.781 [2024-11-05 12:49:39.954484] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:11.039 [2024-11-05 12:49:40.082937] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:11.039 12:49:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:11.297 [2024-11-05 12:49:40.302639] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:11.297 [2024-11-05 12:49:40.303401] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1954c70:1 started. 
00:35:11.297 [2024-11-05 12:49:40.304743] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:11.297 [2024-11-05 12:49:40.304784] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:11.297 [2024-11-05 12:49:40.304813] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:11.297 [2024-11-05 12:49:40.304834] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:11.297 [2024-11-05 12:49:40.304870] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:11.297 [2024-11-05 12:49:40.312154] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1954c70 was disconnected and freed. delete nvme_qpair. 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:12.230 12:49:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 788742 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 788742 ']' 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 788742 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 788742 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 788742' 00:35:12.230 killing process with pid 788742 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 788742 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 788742 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:12.230 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:12.231 12:49:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:12.231 rmmod nvme_tcp 00:35:12.231 rmmod nvme_fabrics 00:35:12.231 rmmod nvme_keyring 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 788675 ']' 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 788675 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 788675 ']' 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 788675 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:12.231 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 788675 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 788675' 00:35:12.489 killing process 
with pid 788675 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 788675 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 788675 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:12.489 12:49:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:15.026 00:35:15.026 real 0m17.850s 00:35:15.026 user 0m25.639s 00:35:15.026 sys 0m3.062s 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:15.026 ************************************ 00:35:15.026 END TEST nvmf_discovery_remove_ifc 00:35:15.026 ************************************ 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.026 ************************************ 00:35:15.026 START TEST nvmf_identify_kernel_target 00:35:15.026 ************************************ 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:15.026 * Looking for test storage... 
00:35:15.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:15.026 12:49:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.026 12:49:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:15.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.026 --rc genhtml_branch_coverage=1 00:35:15.026 --rc genhtml_function_coverage=1 00:35:15.026 --rc genhtml_legend=1 00:35:15.026 --rc geninfo_all_blocks=1 00:35:15.026 --rc geninfo_unexecuted_blocks=1 00:35:15.026 00:35:15.026 ' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:15.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.026 --rc genhtml_branch_coverage=1 00:35:15.026 --rc genhtml_function_coverage=1 00:35:15.026 --rc genhtml_legend=1 00:35:15.026 --rc geninfo_all_blocks=1 00:35:15.026 --rc geninfo_unexecuted_blocks=1 00:35:15.026 00:35:15.026 ' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:15.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.026 --rc genhtml_branch_coverage=1 00:35:15.026 --rc genhtml_function_coverage=1 00:35:15.026 --rc genhtml_legend=1 00:35:15.026 --rc geninfo_all_blocks=1 00:35:15.026 --rc geninfo_unexecuted_blocks=1 00:35:15.026 00:35:15.026 ' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:15.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.026 --rc genhtml_branch_coverage=1 00:35:15.026 --rc genhtml_function_coverage=1 00:35:15.026 --rc genhtml_legend=1 00:35:15.026 --rc geninfo_all_blocks=1 00:35:15.026 --rc geninfo_unexecuted_blocks=1 00:35:15.026 00:35:15.026 ' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.026 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:15.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:15.027 12:49:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.928 12:49:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:16.928 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.928 12:49:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:16.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.928 12:49:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:16.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:16.928 Found net devices under 0000:0a:00.1: cvl_0_1 
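The enumeration traced above relies on sysfs exposing each NIC's interface name under its PCI device directory (`/sys/bus/pci/devices/<addr>/net/`). A minimal sketch of that lookup, with the sysfs root parameterised so it can run against a throwaway fake tree instead of real hardware (the helper name and the fake paths are illustrative, not from the test scripts):

```shell
#!/bin/sh
# Sketch: list network interfaces attached to a PCI device by globbing
# the device's net/ subdirectory in sysfs. Taking the sysfs root as an
# argument lets the sketch run without an ice/cvl NIC present.
net_devs_for_pci() {
    sysfs_root=$1
    pci=$2
    for dev in "$sysfs_root/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue   # glob matched nothing: no net devices
        echo "${dev##*/}"           # keep only the interface name
    done
}

# Demonstration against a fake tree (device and interface names made up).
root=$(mktemp -d)
mkdir -p "$root/devices/0000:0a:00.0/net/cvl_0_0"
net_devs_for_pci "$root" "0000:0a:00.0"   # prints: cvl_0_0
rm -r "$root"
```

Against the real tree the call would be `net_devs_for_pci /sys/bus/pci 0000:0a:00.0`, mirroring the `pci_net_devs` glob in the trace.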
00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.928 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.929 12:49:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:16.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:35:16.929 00:35:16.929 --- 10.0.0.2 ping statistics --- 00:35:16.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.929 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:35:16.929 00:35:16.929 --- 10.0.0.1 ping statistics --- 00:35:16.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.929 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:16.929 
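The `nvmf_tcp_init` steps traced above build a two-sided test network on one host: one port of the NIC pair stays in the default namespace as the initiator, its peer moves into a private namespace as the target, and each side gets an address on 10.0.0.0/24 so the ping checks can verify both directions. A dry-run sketch of that plumbing (it prints the commands instead of executing them, so it runs without root; the function name is an assumption, not part of nvmf/common.sh):

```shell
#!/bin/sh
# Sketch: emit the ip(8) commands that split a NIC pair across namespaces
# for loopback NVMe/TCP testing. Pipe the output to `sh` as root to apply.
netns_setup_cmds() {
    target_if=$1 initiator_if=$2 ns=$3
    cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
EOF
}

netns_setup_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Moving the target port into its own namespace is what makes 10.0.0.1 → 10.0.0.2 traverse the physical link rather than being short-circuited by the local routing table.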
12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:16.929 12:49:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:18.306 Waiting for block devices as requested 00:35:18.306 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:18.306 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:18.564 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:18.564 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:18.564 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:18.822 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:18.822 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:18.822 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:18.822 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:18.822 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:19.080 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:19.080 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:19.080 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:19.338 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:19.338 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:35:19.338 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:19.338 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:19.596 No valid GPT data, bailing 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:19.596 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:19.596 00:35:19.597 Discovery Log Number of Records 2, Generation counter 2 00:35:19.597 =====Discovery Log Entry 0====== 00:35:19.597 trtype: tcp 00:35:19.597 adrfam: ipv4 00:35:19.597 subtype: current discovery subsystem 
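The mkdir/echo/ln -s sequence traced above is the standard configfs recipe for exposing a block device as a kernel NVMe-oF/TCP target. A dry-run sketch of that bring-up, assuming the NQN, device, address, and port values seen in the log (the attribute filenames follow the usual nvmet configfs layout and are not all spelled out verbatim in the trace); the commands are printed rather than executed, since the real thing needs root and the nvmet/nvmet_tcp modules:

```shell
# Dry-run sketch of the kernel nvmet target bring-up traced in nvmf/common.sh.
# NQN, device, IP, and port mirror the log above; attribute names follow the
# standard nvmet configfs layout. run() only prints each command.
nqn="nqn.2016-06.io.spdk:testnqn"
dev="/dev/nvme0n1"
cfg="/sys/kernel/config/nvmet"

run() { printf '%s\n' "$*"; }   # swap the printf for eval "$*" on a real host

run mkdir "$cfg/subsystems/$nqn"                           # @686
run mkdir "$cfg/subsystems/$nqn/namespaces/1"              # @687
run mkdir "$cfg/ports/1"                                   # @688
run "echo 1 > $cfg/subsystems/$nqn/attr_allow_any_host"
run "echo $dev > $cfg/subsystems/$nqn/namespaces/1/device_path"
run "echo 1 > $cfg/subsystems/$nqn/namespaces/1/enable"
run "echo 10.0.0.1 > $cfg/ports/1/addr_traddr"
run "echo tcp > $cfg/ports/1/addr_trtype"
run "echo 4420 > $cfg/ports/1/addr_trsvcid"
run "echo ipv4 > $cfg/ports/1/addr_adrfam"
run ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/" # @705
```

Once the symlink lands in ports/1/subsystems/, the kernel starts listening, which is why the very next step in the log (`nvme discover -a 10.0.0.1 -t tcp -s 4420`) immediately finds two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.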
00:35:19.597 treq: not specified, sq flow control disable supported 00:35:19.597 portid: 1 00:35:19.597 trsvcid: 4420 00:35:19.597 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:19.597 traddr: 10.0.0.1 00:35:19.597 eflags: none 00:35:19.597 sectype: none 00:35:19.597 =====Discovery Log Entry 1====== 00:35:19.597 trtype: tcp 00:35:19.597 adrfam: ipv4 00:35:19.597 subtype: nvme subsystem 00:35:19.597 treq: not specified, sq flow control disable supported 00:35:19.597 portid: 1 00:35:19.597 trsvcid: 4420 00:35:19.597 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:19.597 traddr: 10.0.0.1 00:35:19.597 eflags: none 00:35:19.597 sectype: none 00:35:19.597 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:19.597 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:19.857 ===================================================== 00:35:19.857 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:19.857 ===================================================== 00:35:19.857 Controller Capabilities/Features 00:35:19.857 ================================ 00:35:19.857 Vendor ID: 0000 00:35:19.857 Subsystem Vendor ID: 0000 00:35:19.857 Serial Number: 23dc79ff066b0923e338 00:35:19.857 Model Number: Linux 00:35:19.857 Firmware Version: 6.8.9-20 00:35:19.857 Recommended Arb Burst: 0 00:35:19.857 IEEE OUI Identifier: 00 00 00 00:35:19.857 Multi-path I/O 00:35:19.857 May have multiple subsystem ports: No 00:35:19.857 May have multiple controllers: No 00:35:19.857 Associated with SR-IOV VF: No 00:35:19.857 Max Data Transfer Size: Unlimited 00:35:19.857 Max Number of Namespaces: 0 00:35:19.857 Max Number of I/O Queues: 1024 00:35:19.857 NVMe Specification Version (VS): 1.3 00:35:19.857 NVMe Specification Version (Identify): 1.3 00:35:19.857 Maximum Queue Entries: 1024 
00:35:19.857 Contiguous Queues Required: No 00:35:19.857 Arbitration Mechanisms Supported 00:35:19.857 Weighted Round Robin: Not Supported 00:35:19.857 Vendor Specific: Not Supported 00:35:19.857 Reset Timeout: 7500 ms 00:35:19.857 Doorbell Stride: 4 bytes 00:35:19.857 NVM Subsystem Reset: Not Supported 00:35:19.857 Command Sets Supported 00:35:19.857 NVM Command Set: Supported 00:35:19.857 Boot Partition: Not Supported 00:35:19.857 Memory Page Size Minimum: 4096 bytes 00:35:19.857 Memory Page Size Maximum: 4096 bytes 00:35:19.857 Persistent Memory Region: Not Supported 00:35:19.857 Optional Asynchronous Events Supported 00:35:19.857 Namespace Attribute Notices: Not Supported 00:35:19.857 Firmware Activation Notices: Not Supported 00:35:19.857 ANA Change Notices: Not Supported 00:35:19.857 PLE Aggregate Log Change Notices: Not Supported 00:35:19.857 LBA Status Info Alert Notices: Not Supported 00:35:19.857 EGE Aggregate Log Change Notices: Not Supported 00:35:19.857 Normal NVM Subsystem Shutdown event: Not Supported 00:35:19.857 Zone Descriptor Change Notices: Not Supported 00:35:19.857 Discovery Log Change Notices: Supported 00:35:19.857 Controller Attributes 00:35:19.857 128-bit Host Identifier: Not Supported 00:35:19.857 Non-Operational Permissive Mode: Not Supported 00:35:19.857 NVM Sets: Not Supported 00:35:19.857 Read Recovery Levels: Not Supported 00:35:19.857 Endurance Groups: Not Supported 00:35:19.857 Predictable Latency Mode: Not Supported 00:35:19.857 Traffic Based Keep ALive: Not Supported 00:35:19.857 Namespace Granularity: Not Supported 00:35:19.857 SQ Associations: Not Supported 00:35:19.857 UUID List: Not Supported 00:35:19.857 Multi-Domain Subsystem: Not Supported 00:35:19.857 Fixed Capacity Management: Not Supported 00:35:19.857 Variable Capacity Management: Not Supported 00:35:19.857 Delete Endurance Group: Not Supported 00:35:19.857 Delete NVM Set: Not Supported 00:35:19.857 Extended LBA Formats Supported: Not Supported 00:35:19.857 Flexible 
Data Placement Supported: Not Supported 00:35:19.857 00:35:19.857 Controller Memory Buffer Support 00:35:19.857 ================================ 00:35:19.857 Supported: No 00:35:19.857 00:35:19.857 Persistent Memory Region Support 00:35:19.857 ================================ 00:35:19.857 Supported: No 00:35:19.857 00:35:19.857 Admin Command Set Attributes 00:35:19.857 ============================ 00:35:19.857 Security Send/Receive: Not Supported 00:35:19.857 Format NVM: Not Supported 00:35:19.857 Firmware Activate/Download: Not Supported 00:35:19.857 Namespace Management: Not Supported 00:35:19.857 Device Self-Test: Not Supported 00:35:19.857 Directives: Not Supported 00:35:19.857 NVMe-MI: Not Supported 00:35:19.857 Virtualization Management: Not Supported 00:35:19.857 Doorbell Buffer Config: Not Supported 00:35:19.857 Get LBA Status Capability: Not Supported 00:35:19.857 Command & Feature Lockdown Capability: Not Supported 00:35:19.857 Abort Command Limit: 1 00:35:19.857 Async Event Request Limit: 1 00:35:19.857 Number of Firmware Slots: N/A 00:35:19.857 Firmware Slot 1 Read-Only: N/A 00:35:19.857 Firmware Activation Without Reset: N/A 00:35:19.857 Multiple Update Detection Support: N/A 00:35:19.857 Firmware Update Granularity: No Information Provided 00:35:19.857 Per-Namespace SMART Log: No 00:35:19.857 Asymmetric Namespace Access Log Page: Not Supported 00:35:19.857 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:19.857 Command Effects Log Page: Not Supported 00:35:19.857 Get Log Page Extended Data: Supported 00:35:19.857 Telemetry Log Pages: Not Supported 00:35:19.857 Persistent Event Log Pages: Not Supported 00:35:19.857 Supported Log Pages Log Page: May Support 00:35:19.857 Commands Supported & Effects Log Page: Not Supported 00:35:19.857 Feature Identifiers & Effects Log Page:May Support 00:35:19.857 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.857 Data Area 4 for Telemetry Log: Not Supported 00:35:19.857 Error Log Page Entries 
Supported: 1 00:35:19.857 Keep Alive: Not Supported 00:35:19.857 00:35:19.857 NVM Command Set Attributes 00:35:19.857 ========================== 00:35:19.857 Submission Queue Entry Size 00:35:19.857 Max: 1 00:35:19.857 Min: 1 00:35:19.857 Completion Queue Entry Size 00:35:19.857 Max: 1 00:35:19.857 Min: 1 00:35:19.857 Number of Namespaces: 0 00:35:19.857 Compare Command: Not Supported 00:35:19.857 Write Uncorrectable Command: Not Supported 00:35:19.857 Dataset Management Command: Not Supported 00:35:19.857 Write Zeroes Command: Not Supported 00:35:19.857 Set Features Save Field: Not Supported 00:35:19.857 Reservations: Not Supported 00:35:19.857 Timestamp: Not Supported 00:35:19.857 Copy: Not Supported 00:35:19.857 Volatile Write Cache: Not Present 00:35:19.857 Atomic Write Unit (Normal): 1 00:35:19.857 Atomic Write Unit (PFail): 1 00:35:19.857 Atomic Compare & Write Unit: 1 00:35:19.857 Fused Compare & Write: Not Supported 00:35:19.857 Scatter-Gather List 00:35:19.857 SGL Command Set: Supported 00:35:19.857 SGL Keyed: Not Supported 00:35:19.858 SGL Bit Bucket Descriptor: Not Supported 00:35:19.858 SGL Metadata Pointer: Not Supported 00:35:19.858 Oversized SGL: Not Supported 00:35:19.858 SGL Metadata Address: Not Supported 00:35:19.858 SGL Offset: Supported 00:35:19.858 Transport SGL Data Block: Not Supported 00:35:19.858 Replay Protected Memory Block: Not Supported 00:35:19.858 00:35:19.858 Firmware Slot Information 00:35:19.858 ========================= 00:35:19.858 Active slot: 0 00:35:19.858 00:35:19.858 00:35:19.858 Error Log 00:35:19.858 ========= 00:35:19.858 00:35:19.858 Active Namespaces 00:35:19.858 ================= 00:35:19.858 Discovery Log Page 00:35:19.858 ================== 00:35:19.858 Generation Counter: 2 00:35:19.858 Number of Records: 2 00:35:19.858 Record Format: 0 00:35:19.858 00:35:19.858 Discovery Log Entry 0 00:35:19.858 ---------------------- 00:35:19.858 Transport Type: 3 (TCP) 00:35:19.858 Address Family: 1 (IPv4) 00:35:19.858 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:19.858 Entry Flags: 00:35:19.858 Duplicate Returned Information: 0 00:35:19.858 Explicit Persistent Connection Support for Discovery: 0 00:35:19.858 Transport Requirements: 00:35:19.858 Secure Channel: Not Specified 00:35:19.858 Port ID: 1 (0x0001) 00:35:19.858 Controller ID: 65535 (0xffff) 00:35:19.858 Admin Max SQ Size: 32 00:35:19.858 Transport Service Identifier: 4420 00:35:19.858 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:19.858 Transport Address: 10.0.0.1 00:35:19.858 Discovery Log Entry 1 00:35:19.858 ---------------------- 00:35:19.858 Transport Type: 3 (TCP) 00:35:19.858 Address Family: 1 (IPv4) 00:35:19.858 Subsystem Type: 2 (NVM Subsystem) 00:35:19.858 Entry Flags: 00:35:19.858 Duplicate Returned Information: 0 00:35:19.858 Explicit Persistent Connection Support for Discovery: 0 00:35:19.858 Transport Requirements: 00:35:19.858 Secure Channel: Not Specified 00:35:19.858 Port ID: 1 (0x0001) 00:35:19.858 Controller ID: 65535 (0xffff) 00:35:19.858 Admin Max SQ Size: 32 00:35:19.858 Transport Service Identifier: 4420 00:35:19.858 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:19.858 Transport Address: 10.0.0.1 00:35:19.858 12:49:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:19.858 get_feature(0x01) failed 00:35:19.858 get_feature(0x02) failed 00:35:19.858 get_feature(0x04) failed 00:35:19.858 ===================================================== 00:35:19.858 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:19.858 ===================================================== 00:35:19.858 Controller Capabilities/Features 00:35:19.858 ================================ 00:35:19.858 Vendor ID: 0000 00:35:19.858 Subsystem Vendor ID: 
0000 00:35:19.858 Serial Number: 293404306d383388a4ae 00:35:19.858 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:19.858 Firmware Version: 6.8.9-20 00:35:19.858 Recommended Arb Burst: 6 00:35:19.858 IEEE OUI Identifier: 00 00 00 00:35:19.858 Multi-path I/O 00:35:19.858 May have multiple subsystem ports: Yes 00:35:19.858 May have multiple controllers: Yes 00:35:19.858 Associated with SR-IOV VF: No 00:35:19.858 Max Data Transfer Size: Unlimited 00:35:19.858 Max Number of Namespaces: 1024 00:35:19.858 Max Number of I/O Queues: 128 00:35:19.858 NVMe Specification Version (VS): 1.3 00:35:19.858 NVMe Specification Version (Identify): 1.3 00:35:19.858 Maximum Queue Entries: 1024 00:35:19.858 Contiguous Queues Required: No 00:35:19.858 Arbitration Mechanisms Supported 00:35:19.858 Weighted Round Robin: Not Supported 00:35:19.858 Vendor Specific: Not Supported 00:35:19.858 Reset Timeout: 7500 ms 00:35:19.858 Doorbell Stride: 4 bytes 00:35:19.858 NVM Subsystem Reset: Not Supported 00:35:19.858 Command Sets Supported 00:35:19.858 NVM Command Set: Supported 00:35:19.858 Boot Partition: Not Supported 00:35:19.858 Memory Page Size Minimum: 4096 bytes 00:35:19.858 Memory Page Size Maximum: 4096 bytes 00:35:19.858 Persistent Memory Region: Not Supported 00:35:19.858 Optional Asynchronous Events Supported 00:35:19.858 Namespace Attribute Notices: Supported 00:35:19.858 Firmware Activation Notices: Not Supported 00:35:19.858 ANA Change Notices: Supported 00:35:19.858 PLE Aggregate Log Change Notices: Not Supported 00:35:19.858 LBA Status Info Alert Notices: Not Supported 00:35:19.858 EGE Aggregate Log Change Notices: Not Supported 00:35:19.858 Normal NVM Subsystem Shutdown event: Not Supported 00:35:19.858 Zone Descriptor Change Notices: Not Supported 00:35:19.858 Discovery Log Change Notices: Not Supported 00:35:19.858 Controller Attributes 00:35:19.858 128-bit Host Identifier: Supported 00:35:19.858 Non-Operational Permissive Mode: Not Supported 00:35:19.858 NVM Sets: Not 
Supported 00:35:19.858 Read Recovery Levels: Not Supported 00:35:19.858 Endurance Groups: Not Supported 00:35:19.858 Predictable Latency Mode: Not Supported 00:35:19.858 Traffic Based Keep ALive: Supported 00:35:19.858 Namespace Granularity: Not Supported 00:35:19.858 SQ Associations: Not Supported 00:35:19.858 UUID List: Not Supported 00:35:19.858 Multi-Domain Subsystem: Not Supported 00:35:19.858 Fixed Capacity Management: Not Supported 00:35:19.858 Variable Capacity Management: Not Supported 00:35:19.858 Delete Endurance Group: Not Supported 00:35:19.858 Delete NVM Set: Not Supported 00:35:19.858 Extended LBA Formats Supported: Not Supported 00:35:19.858 Flexible Data Placement Supported: Not Supported 00:35:19.858 00:35:19.858 Controller Memory Buffer Support 00:35:19.858 ================================ 00:35:19.858 Supported: No 00:35:19.858 00:35:19.858 Persistent Memory Region Support 00:35:19.858 ================================ 00:35:19.858 Supported: No 00:35:19.858 00:35:19.858 Admin Command Set Attributes 00:35:19.858 ============================ 00:35:19.858 Security Send/Receive: Not Supported 00:35:19.858 Format NVM: Not Supported 00:35:19.858 Firmware Activate/Download: Not Supported 00:35:19.858 Namespace Management: Not Supported 00:35:19.858 Device Self-Test: Not Supported 00:35:19.858 Directives: Not Supported 00:35:19.858 NVMe-MI: Not Supported 00:35:19.858 Virtualization Management: Not Supported 00:35:19.858 Doorbell Buffer Config: Not Supported 00:35:19.858 Get LBA Status Capability: Not Supported 00:35:19.858 Command & Feature Lockdown Capability: Not Supported 00:35:19.858 Abort Command Limit: 4 00:35:19.858 Async Event Request Limit: 4 00:35:19.858 Number of Firmware Slots: N/A 00:35:19.858 Firmware Slot 1 Read-Only: N/A 00:35:19.858 Firmware Activation Without Reset: N/A 00:35:19.858 Multiple Update Detection Support: N/A 00:35:19.858 Firmware Update Granularity: No Information Provided 00:35:19.858 Per-Namespace SMART Log: Yes 
00:35:19.858 Asymmetric Namespace Access Log Page: Supported 00:35:19.858 ANA Transition Time : 10 sec 00:35:19.858 00:35:19.858 Asymmetric Namespace Access Capabilities 00:35:19.858 ANA Optimized State : Supported 00:35:19.858 ANA Non-Optimized State : Supported 00:35:19.858 ANA Inaccessible State : Supported 00:35:19.858 ANA Persistent Loss State : Supported 00:35:19.858 ANA Change State : Supported 00:35:19.858 ANAGRPID is not changed : No 00:35:19.858 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:19.858 00:35:19.858 ANA Group Identifier Maximum : 128 00:35:19.858 Number of ANA Group Identifiers : 128 00:35:19.858 Max Number of Allowed Namespaces : 1024 00:35:19.858 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:19.858 Command Effects Log Page: Supported 00:35:19.858 Get Log Page Extended Data: Supported 00:35:19.858 Telemetry Log Pages: Not Supported 00:35:19.858 Persistent Event Log Pages: Not Supported 00:35:19.858 Supported Log Pages Log Page: May Support 00:35:19.858 Commands Supported & Effects Log Page: Not Supported 00:35:19.858 Feature Identifiers & Effects Log Page:May Support 00:35:19.858 NVMe-MI Commands & Effects Log Page: May Support 00:35:19.858 Data Area 4 for Telemetry Log: Not Supported 00:35:19.858 Error Log Page Entries Supported: 128 00:35:19.858 Keep Alive: Supported 00:35:19.858 Keep Alive Granularity: 1000 ms 00:35:19.858 00:35:19.858 NVM Command Set Attributes 00:35:19.858 ========================== 00:35:19.858 Submission Queue Entry Size 00:35:19.858 Max: 64 00:35:19.858 Min: 64 00:35:19.858 Completion Queue Entry Size 00:35:19.858 Max: 16 00:35:19.859 Min: 16 00:35:19.859 Number of Namespaces: 1024 00:35:19.859 Compare Command: Not Supported 00:35:19.859 Write Uncorrectable Command: Not Supported 00:35:19.859 Dataset Management Command: Supported 00:35:19.859 Write Zeroes Command: Supported 00:35:19.859 Set Features Save Field: Not Supported 00:35:19.859 Reservations: Not Supported 00:35:19.859 Timestamp: Not Supported 
00:35:19.859 Copy: Not Supported 00:35:19.859 Volatile Write Cache: Present 00:35:19.859 Atomic Write Unit (Normal): 1 00:35:19.859 Atomic Write Unit (PFail): 1 00:35:19.859 Atomic Compare & Write Unit: 1 00:35:19.859 Fused Compare & Write: Not Supported 00:35:19.859 Scatter-Gather List 00:35:19.859 SGL Command Set: Supported 00:35:19.859 SGL Keyed: Not Supported 00:35:19.859 SGL Bit Bucket Descriptor: Not Supported 00:35:19.859 SGL Metadata Pointer: Not Supported 00:35:19.859 Oversized SGL: Not Supported 00:35:19.859 SGL Metadata Address: Not Supported 00:35:19.859 SGL Offset: Supported 00:35:19.859 Transport SGL Data Block: Not Supported 00:35:19.859 Replay Protected Memory Block: Not Supported 00:35:19.859 00:35:19.859 Firmware Slot Information 00:35:19.859 ========================= 00:35:19.859 Active slot: 0 00:35:19.859 00:35:19.859 Asymmetric Namespace Access 00:35:19.859 =========================== 00:35:19.859 Change Count : 0 00:35:19.859 Number of ANA Group Descriptors : 1 00:35:19.859 ANA Group Descriptor : 0 00:35:19.859 ANA Group ID : 1 00:35:19.859 Number of NSID Values : 1 00:35:19.859 Change Count : 0 00:35:19.859 ANA State : 1 00:35:19.859 Namespace Identifier : 1 00:35:19.859 00:35:19.859 Commands Supported and Effects 00:35:19.859 ============================== 00:35:19.859 Admin Commands 00:35:19.859 -------------- 00:35:19.859 Get Log Page (02h): Supported 00:35:19.859 Identify (06h): Supported 00:35:19.859 Abort (08h): Supported 00:35:19.859 Set Features (09h): Supported 00:35:19.859 Get Features (0Ah): Supported 00:35:19.859 Asynchronous Event Request (0Ch): Supported 00:35:19.859 Keep Alive (18h): Supported 00:35:19.859 I/O Commands 00:35:19.859 ------------ 00:35:19.859 Flush (00h): Supported 00:35:19.859 Write (01h): Supported LBA-Change 00:35:19.859 Read (02h): Supported 00:35:19.859 Write Zeroes (08h): Supported LBA-Change 00:35:19.859 Dataset Management (09h): Supported 00:35:19.859 00:35:19.859 Error Log 00:35:19.859 ========= 
00:35:19.859 Entry: 0 00:35:19.859 Error Count: 0x3 00:35:19.859 Submission Queue Id: 0x0 00:35:19.859 Command Id: 0x5 00:35:19.859 Phase Bit: 0 00:35:19.859 Status Code: 0x2 00:35:19.859 Status Code Type: 0x0 00:35:19.859 Do Not Retry: 1 00:35:19.859 Error Location: 0x28 00:35:19.859 LBA: 0x0 00:35:19.859 Namespace: 0x0 00:35:19.859 Vendor Log Page: 0x0 00:35:19.859 ----------- 00:35:19.859 Entry: 1 00:35:19.859 Error Count: 0x2 00:35:19.859 Submission Queue Id: 0x0 00:35:19.859 Command Id: 0x5 00:35:19.859 Phase Bit: 0 00:35:19.859 Status Code: 0x2 00:35:19.859 Status Code Type: 0x0 00:35:19.859 Do Not Retry: 1 00:35:19.859 Error Location: 0x28 00:35:19.859 LBA: 0x0 00:35:19.859 Namespace: 0x0 00:35:19.859 Vendor Log Page: 0x0 00:35:19.859 ----------- 00:35:19.859 Entry: 2 00:35:19.859 Error Count: 0x1 00:35:19.859 Submission Queue Id: 0x0 00:35:19.859 Command Id: 0x4 00:35:19.859 Phase Bit: 0 00:35:19.859 Status Code: 0x2 00:35:19.859 Status Code Type: 0x0 00:35:19.859 Do Not Retry: 1 00:35:19.859 Error Location: 0x28 00:35:19.859 LBA: 0x0 00:35:19.859 Namespace: 0x0 00:35:19.859 Vendor Log Page: 0x0 00:35:19.859 00:35:19.859 Number of Queues 00:35:19.859 ================ 00:35:19.859 Number of I/O Submission Queues: 128 00:35:19.859 Number of I/O Completion Queues: 128 00:35:19.859 00:35:19.859 ZNS Specific Controller Data 00:35:19.859 ============================ 00:35:19.859 Zone Append Size Limit: 0 00:35:19.859 00:35:19.859 00:35:19.859 Active Namespaces 00:35:19.859 ================= 00:35:19.859 get_feature(0x05) failed 00:35:19.859 Namespace ID:1 00:35:19.859 Command Set Identifier: NVM (00h) 00:35:19.859 Deallocate: Supported 00:35:19.859 Deallocated/Unwritten Error: Not Supported 00:35:19.859 Deallocated Read Value: Unknown 00:35:19.859 Deallocate in Write Zeroes: Not Supported 00:35:19.859 Deallocated Guard Field: 0xFFFF 00:35:19.859 Flush: Supported 00:35:19.859 Reservation: Not Supported 00:35:19.859 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:19.859 Size (in LBAs): 1953525168 (931GiB) 00:35:19.859 Capacity (in LBAs): 1953525168 (931GiB) 00:35:19.859 Utilization (in LBAs): 1953525168 (931GiB) 00:35:19.859 UUID: 62f97dfc-eead-41ae-bfbf-0080ba1737c4 00:35:19.859 Thin Provisioning: Not Supported 00:35:19.859 Per-NS Atomic Units: Yes 00:35:19.859 Atomic Boundary Size (Normal): 0 00:35:19.859 Atomic Boundary Size (PFail): 0 00:35:19.859 Atomic Boundary Offset: 0 00:35:19.859 NGUID/EUI64 Never Reused: No 00:35:19.859 ANA group ID: 1 00:35:19.859 Namespace Write Protected: No 00:35:19.859 Number of LBA Formats: 1 00:35:19.859 Current LBA Format: LBA Format #00 00:35:19.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:19.859 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.859 rmmod nvme_tcp 00:35:19.859 rmmod nvme_fabrics 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.859 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:20.119 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:20.119 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:20.119 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.119 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.119 12:49:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:22.026 12:49:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:22.026 12:49:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:23.401 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.401 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.401 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.401 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.401 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.401 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:23.401 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.401 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:23.401 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
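The clean_kernel_target steps traced above undo the bring-up in reverse order, which matters: configfs refuses to rmdir a subsystem that a port still references, so the port-side symlink has to go first. A dry-run sketch under the same assumptions as before (commands printed, not executed; real use requires root):

```shell
# Dry-run sketch of clean_kernel_target: disable the namespace, drop the
# port->subsystem link, remove the configfs directories, then unload nvmet.
nqn="nqn.2016-06.io.spdk:testnqn"
cfg="/sys/kernel/config/nvmet"

run() { printf '%s\n' "$*"; }   # swap the printf for eval "$*" on a real host

run "echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable"     # @714
run rm -f "$cfg/ports/1/subsystems/$nqn"                    # @716
run rmdir "$cfg/subsystems/$nqn/namespaces/1"               # @717
run rmdir "$cfg/ports/1"                                    # @718
run rmdir "$cfg/subsystems/$nqn"                            # @719
run modprobe -r nvmet_tcp nvmet                             # @723
```

After the modules are unloaded, the log hands control back to setup.sh, which rebinds the ioatdma and nvme devices to vfio-pci for the next test.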
00:35:24.336 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:24.336 00:35:24.336 real 0m9.795s 00:35:24.336 user 0m2.126s 00:35:24.336 sys 0m3.684s 00:35:24.336 12:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:24.336 12:49:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:24.336 ************************************ 00:35:24.336 END TEST nvmf_identify_kernel_target 00:35:24.336 ************************************ 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.595 ************************************ 00:35:24.595 START TEST nvmf_auth_host 00:35:24.595 ************************************ 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:24.595 * Looking for test storage... 
00:35:24.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:24.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.595 --rc genhtml_branch_coverage=1 00:35:24.595 --rc genhtml_function_coverage=1 00:35:24.595 --rc genhtml_legend=1 00:35:24.595 --rc geninfo_all_blocks=1 00:35:24.595 --rc geninfo_unexecuted_blocks=1 00:35:24.595 00:35:24.595 ' 00:35:24.595 12:49:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:24.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.595 --rc genhtml_branch_coverage=1 00:35:24.595 --rc genhtml_function_coverage=1 00:35:24.595 --rc genhtml_legend=1 00:35:24.595 --rc geninfo_all_blocks=1 00:35:24.595 --rc geninfo_unexecuted_blocks=1 00:35:24.595 00:35:24.595 ' 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:24.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.595 --rc genhtml_branch_coverage=1 00:35:24.595 --rc genhtml_function_coverage=1 00:35:24.595 --rc genhtml_legend=1 00:35:24.595 --rc geninfo_all_blocks=1 00:35:24.595 --rc geninfo_unexecuted_blocks=1 00:35:24.595 00:35:24.595 ' 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:24.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:24.595 --rc genhtml_branch_coverage=1 00:35:24.595 --rc genhtml_function_coverage=1 00:35:24.595 --rc genhtml_legend=1 00:35:24.595 --rc geninfo_all_blocks=1 00:35:24.595 --rc geninfo_unexecuted_blocks=1 00:35:24.595 00:35:24.595 ' 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:35:24.595 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.596 12:49:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:24.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:24.596 12:49:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:24.596 12:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:27.128 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.128 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:27.129 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:27.129 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:27.129 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:27.129 12:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.129 12:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:35:27.129 00:35:27.129 --- 10.0.0.2 ping statistics --- 00:35:27.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.129 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:27.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:35:27.129 00:35:27.129 --- 10.0.0.1 ping statistics --- 00:35:27.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.129 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=795918 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:27.129 12:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 795918 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 795918 ']' 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:27.129 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=67b0901d6da185f9dcb10c7354ab5a95 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eTo 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 67b0901d6da185f9dcb10c7354ab5a95 0 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 67b0901d6da185f9dcb10c7354ab5a95 0 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=67b0901d6da185f9dcb10c7354ab5a95 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eTo 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eTo 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eTo 00:35:27.388 12:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6804b129782685cd2683227ed26d5e1d9e980ea94e59d897713f9af19322ba89 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8fv 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6804b129782685cd2683227ed26d5e1d9e980ea94e59d897713f9af19322ba89 3 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6804b129782685cd2683227ed26d5e1d9e980ea94e59d897713f9af19322ba89 3 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6804b129782685cd2683227ed26d5e1d9e980ea94e59d897713f9af19322ba89 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8fv 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8fv 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.8fv 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72d60bb2f18dabdf1fb168017e71df246562bd0c027b9806 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.JhJ 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72d60bb2f18dabdf1fb168017e71df246562bd0c027b9806 0 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72d60bb2f18dabdf1fb168017e71df246562bd0c027b9806 0 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.388 12:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72d60bb2f18dabdf1fb168017e71df246562bd0c027b9806 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.388 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.JhJ 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.JhJ 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JhJ 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=29493ffaa3c613268ba4495129a02cbd21a8c5e64a720817 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RVb 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 29493ffaa3c613268ba4495129a02cbd21a8c5e64a720817 2 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 29493ffaa3c613268ba4495129a02cbd21a8c5e64a720817 2 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=29493ffaa3c613268ba4495129a02cbd21a8c5e64a720817 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RVb 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RVb 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.RVb 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc1e978aeb2b378187846cf24b70fb14 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.loZ 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc1e978aeb2b378187846cf24b70fb14 1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc1e978aeb2b378187846cf24b70fb14 1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc1e978aeb2b378187846cf24b70fb14 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.loZ 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.loZ 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.loZ 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=d569bbe4c853c590cf17d30edf580f13 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.DSA 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d569bbe4c853c590cf17d30edf580f13 1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d569bbe4c853c590cf17d30edf580f13 1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d569bbe4c853c590cf17d30edf580f13 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.DSA 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.DSA 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.DSA 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:35:27.647 12:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:35:27.647 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dc31cb8b43ebdb80b1e75cb42274318ff7199df77ce5b737 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kUn 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dc31cb8b43ebdb80b1e75cb42274318ff7199df77ce5b737 2 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dc31cb8b43ebdb80b1e75cb42274318ff7199df77ce5b737 2 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dc31cb8b43ebdb80b1e75cb42274318ff7199df77ce5b737 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kUn 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kUn 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kUn 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:35:27.648 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d4d641a11d9aa979f1b0a872907740d9 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fyq 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d4d641a11d9aa979f1b0a872907740d9 0 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d4d641a11d9aa979f1b0a872907740d9 0 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d4d641a11d9aa979f1b0a872907740d9 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fyq 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fyq 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fyq 00:35:27.906 12:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e22b0d8ee7502d71a4101b56c1743d4f42a8900fb896102ce3e04ac3ebe6eb9a 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xlu 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e22b0d8ee7502d71a4101b56c1743d4f42a8900fb896102ce3e04ac3ebe6eb9a 3 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e22b0d8ee7502d71a4101b56c1743d4f42a8900fb896102ce3e04ac3ebe6eb9a 3 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e22b0d8ee7502d71a4101b56c1743d4f42a8900fb896102ce3e04ac3ebe6eb9a 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xlu 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xlu 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xlu 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 795918 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 795918 ']' 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
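The gen_dhchap_key / format_dhchap_key calls traced above read random bytes with xxd and wrap the hex string into an NVMe DH-HMAC-CHAP secret ("DHHC-1:<digest>:<base64>:") via an inline python snippet. A minimal standalone sketch of that flow, assuming (as the DHHC-1 strings later in the log suggest) that a CRC-32 of the ASCII key is appended before base64 encoding; the CRC byte order and the hex digest field are assumptions, not visible in the trace:

```shell
# Sketch of "gen_dhchap_key sha384 48" as traced above (not the verbatim
# nvmf/common.sh implementation). 24 random bytes -> 48 hex characters.
key=$(xxd -p -c0 -l 24 /dev/urandom)

# Digest index 2 == sha384, per the digests map in the trace.
# Assumption: CRC-32 of the ASCII key, little-endian, is appended
# before base64 encoding.
formatted=$(python3 - "$key" 2 <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$formatted"
```

The resulting string is what the trace later feeds to keyring_file_add_key and, from there, to bdev_nvme_attach_controller as --dhchap-key / --dhchap-ctrlr-key.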
00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:27.906 12:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eTo 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.8fv ]] 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8fv 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.164 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JhJ 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.RVb ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.RVb 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.loZ 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.DSA ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DSA 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.kUn 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fyq ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fyq 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xlu 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.165 12:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:28.165 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:28.423 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:28.423 12:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:29.356 Waiting for block devices as requested 00:35:29.356 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:29.614 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.614 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.872 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.872 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.872 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:29.872 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.129 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.129 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:30.129 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:30.129 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:30.386 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:30.386 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:30.386 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:30.386 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:30.386 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:30.644 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:30.902 No valid GPT data, bailing 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:30.902 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:31.160 00:35:31.160 Discovery Log Number of Records 2, Generation counter 2 00:35:31.160 =====Discovery Log Entry 0====== 00:35:31.160 trtype: tcp 00:35:31.160 adrfam: ipv4 00:35:31.160 subtype: current discovery subsystem 00:35:31.160 treq: not specified, sq flow control disable supported 00:35:31.160 portid: 1 00:35:31.160 trsvcid: 4420 00:35:31.160 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:31.160 traddr: 10.0.0.1 00:35:31.160 eflags: none 00:35:31.160 sectype: none 00:35:31.160 =====Discovery Log Entry 1====== 00:35:31.160 trtype: tcp 00:35:31.160 adrfam: ipv4 00:35:31.160 subtype: nvme subsystem 00:35:31.160 treq: not specified, sq flow control disable supported 00:35:31.160 portid: 1 00:35:31.160 trsvcid: 4420 00:35:31.160 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:31.160 traddr: 10.0.0.1 00:35:31.160 eflags: none 00:35:31.160 sectype: none 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
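configure_kernel_target, whose trace ends with the nvme discover output above, drives the kernel nvmet target entirely through configfs. Condensed into a standalone sketch (NQN, device, and address values are taken from the log; the configfs attribute behind each bare `echo` in the trace is inferred, since xtrace shows only the redirected values; needs root and the nvmet/nvmet-tcp modules, so this is illustrative rather than something to run as-is):

```shell
# Inferred reconstruction of configure_kernel_target (run as root).
modprobe nvmet          # as in the trace; nvmet-tcp is also needed for a tcp port
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # inferred target
echo 1            > "$subsys/attr_allow_any_host"             # inferred target of "echo 1"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port
```

After the symlink, the discovery log above reports two records on 10.0.0.1:4420: the discovery subsystem and nqn.2024-02.io.spdk:cnode0.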
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
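nvmet_auth_set_key (host/auth.sh@42-51), traced here, pushes the digest, DH group, and the two DHHC-1 secrets into the kernel host entry created at host/auth.sh@36-38 above. A sketch of the configfs writes, with the attribute names inferred from the kernel nvmet auth interface (the trace shows only the echoed values, and the keys are truncated here for brevity):

```shell
# Inferred targets for the echoes in "nvmet_auth_set_key sha256 ffdhe2048 1"
# (run as root; host dir created earlier by mkdir + allowed_hosts symlink).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest for DH-HMAC-CHAP
echo ffdhe2048      > "$host/dhchap_dhgroup"    # DH group
echo 'DHHC-1:00:NzJkNjBi...' > "$host/dhchap_key"       # keys[1], truncated
echo 'DHHC-1:02:Mjk0OTNm...' > "$host/dhchap_ctrl_key"  # ckeys[1], truncated
```

With these set, the connect_authenticate run that follows can negotiate sha256/ffdhe2048 for keyid 1 in both directions.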
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:31.160 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.161 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.419 nvme0n1 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.419 nvme0n1 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.419 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.677 12:50:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.677 
12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.677 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.678 nvme0n1 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.678 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.937 12:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:35:31.937 nvme0n1 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.937 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.196 nvme0n1 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:32.196 12:50:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.196 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.454 nvme0n1 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.454 
12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:32.454 
12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.454 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.455 12:50:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.455 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.713 nvme0n1 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.713 12:50:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.713 12:50:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.713 12:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.970 nvme0n1 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.970 12:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.970 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.227 nvme0n1 00:35:33.227 12:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.227 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:33.228 12:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.228 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.486 nvme0n1 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.486 12:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.486 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.745 nvme0n1 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.745 12:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.004 nvme0n1 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.004 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.262 
12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.262 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.521 nvme0n1 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.521 12:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.521 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.522 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.780 nvme0n1 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.780 12:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:34.780 
12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.780 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:34.781 12:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.781 12:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.038 nvme0n1 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.038 12:50:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.038 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.296 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.296 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.297 
12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.297 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.555 nvme0n1 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.555 12:50:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.555 12:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.121 nvme0n1 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:36.121 12:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.121 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.686 nvme0n1 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.686 12:50:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:36.686 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.687 12:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.252 nvme0n1 00:35:37.252 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.253 12:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.253 12:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.253 12:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.253 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.819 nvme0n1 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.819 12:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.819 12:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.819 12:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.100 nvme0n1 00:35:38.100 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.100 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.100 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.100 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.100 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.100 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:38.359 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:38.360 12:50:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.360 12:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.293 nvme0n1 00:35:39.293 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.293 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.294 12:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.294 12:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.294 12:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.294 12:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.859 nvme0n1 00:35:40.117 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.118 12:50:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.118 12:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.051 nvme0n1 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.051 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.983 nvme0n1 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:41.983 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.984 
12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.984 12:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.984 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 nvme0n1 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.917 12:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 nvme0n1 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:42.917 
12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.917 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:42.918 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.918 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:42.918 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:42.918 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:42.918 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:42.918 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.918 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.175 nvme0n1 
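Each iteration above builds its attach arguments with `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` (`host/auth.sh@58`). That relies on bash's `:+` alternate-value expansion: the extra `--dhchap-ctrlr-key` flag is produced only when the controller key for that `keyid` is non-empty, which is why the keyid=4 attach in this log carries no `--dhchap-ctrlr-key` at all. A standalone sketch of the idiom (the placeholder `ckeys` values below are illustrative, not the test's real DHHC-1 secrets):

```shell
#!/usr/bin/env bash
# The host/auth.sh@58 idiom: ${ckeys[keyid]:+word ...} expands to the
# extra arguments only when the controller key is non-empty, so entries
# with no ckey omit --dhchap-ctrlr-key entirely.
# Placeholder key material, NOT the log's real DHHC-1 secrets.
ckeys=("DHHC-1:03:aaaa" "DHHC-1:02:bbbb" "")

keyid=1
extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "keyid=1 adds ${#extra[@]} args"    # flag plus key name

keyid=2
extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "keyid=2 adds ${#extra[@]} args"    # empty ckey: flag omitted
```

Because the empty string counts as "null" for `:+`, no sentinel value or explicit `if` is needed to drop the flag for key slots that have no controller key.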
00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.175 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:43.176 12:50:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.176 
12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.176 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.434 nvme0n1 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.434 12:50:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.434 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.692 nvme0n1 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.693 12:50:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.693 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.951 nvme0n1 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.951 12:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.951 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:43.952 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.952 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.209 nvme0n1 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.209 
12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:44.209 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.210 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.468 nvme0n1 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 
00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.468 12:50:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.468 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.469 nvme0n1 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.469 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.726 12:50:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.726 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.727 nvme0n1 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.727 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.985 12:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.985 nvme0n1 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:44.985 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.243 12:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.243 12:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.243 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.244 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.244 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.244 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.244 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.244 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:45.244 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.244 12:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.501 nvme0n1 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.501 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.502 
12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.502 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.760 nvme0n1 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:45.760 12:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:45.760 12:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.760 12:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.018 nvme0n1 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.018 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.276 12:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.276 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.534 nvme0n1 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.534 12:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:46.534 12:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.534 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:46.535 
12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.535 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.793 nvme0n1 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:46.793 12:50:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.793 12:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.359 nvme0n1 
00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:47.359 12:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.359 
12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.359 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.360 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.925 nvme0n1 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.925 12:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:47.925 12:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.925 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.491 nvme0n1 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.491 12:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.057 nvme0n1 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.057 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.058 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:49.623 nvme0n1 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:49.623 12:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.623 12:50:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.623 12:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.565 nvme0n1 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:50.565 12:50:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.565 12:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.501 nvme0n1 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.501 
12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.501 12:50:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.501 12:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.433 nvme0n1 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.433 12:50:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:52.433 12:50:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:52.433 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:52.434 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:52.434 12:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.366 nvme0n1 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:53.366 12:50:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.366 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.367 12:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.931 nvme0n1 00:35:53.931 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.931 
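The cycle traced above repeats for each digest (sha384, sha512), DH group (ffdhe2048, ffdhe8192) and keyid (0-4): set the target-side key, call `bdev_nvme_set_options` with the matching `--dhchap-digests`/`--dhchap-dhgroups`, attach with `--dhchap-key keyN` (plus `--dhchap-ctrlr-key ckeyN` when a controller key exists; keyid 4 in this log has none), then verify and detach. A minimal sketch of how the attach command line varies per keyid — this is an illustration only, not part of `auth.sh`; it just builds the RPC argument string (which would normally be passed to SPDK's `scripts/rpc.py`) so the optional controller-key handling is visible:

```shell
# Hypothetical helper mirroring the attach step seen in this log.
# $1 = keyid, $2 = non-empty if a controller key (ckeyN) is configured.
build_attach_cmd() {
  local keyid=$1 ckey_opt=""
  # controller key is optional; keyid 4 above is attached without one
  [ -n "${2:-}" ] && ckey_opt=" --dhchap-ctrlr-key ckey${keyid}"
  echo "bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key${keyid}${ckey_opt}"
}

build_attach_cmd 3 with-ckey   # ends with: --dhchap-key key3 --dhchap-ctrlr-key ckey3
build_attach_cmd 4             # ends with: --dhchap-key key4
```

The bidirectional case (host key plus controller key) is what exercises mutual authentication; the keyid-4 case checks unidirectional auth.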
12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:53.931 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:53.931 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.931 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:54.189 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.190 nvme0n1 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.190 12:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.190 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.448 nvme0n1 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:54.448 12:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.448 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.449 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.706 nvme0n1 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.706 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.707 12:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.707 12:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.707 12:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.965 nvme0n1 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.965 12:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:54.965 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.966 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.224 nvme0n1 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.224 12:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.224 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.482 nvme0n1 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:55.482 12:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:55.482 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.483 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.741 nvme0n1 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:55.741 
12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:55.741 12:50:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.741 12:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.999 nvme0n1 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.999 12:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:35:55.999 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.000 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.257 nvme0n1 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.258 12:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.258 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.516 nvme0n1 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.516 12:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.516 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.774 nvme0n1 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:56.774 12:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:56.774 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:56.775 12:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.775 12:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.775 12:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.775 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.340 nvme0n1 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.340 12:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:57.340 12:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.340 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.341 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.598 nvme0n1 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:57.598 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:57.599 12:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.599 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.857 nvme0n1 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:57.857 
12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.857 12:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.115 nvme0n1 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:58.115 12:50:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.115 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.680 nvme0n1 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:35:58.680 12:50:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.680 12:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.245 nvme0n1 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.245 
12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.245 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.810 nvme0n1 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:59.810 12:50:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:59.810 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.811 12:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:00.376 nvme0n1 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.376 
12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.376 12:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.941 nvme0n1 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjdiMDkwMWQ2ZGExODVmOWRjYjEwYzczNTRhYjVhOTW7LZVF: 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjgwNGIxMjk3ODI2ODVjZDI2ODMyMjdlZDI2ZDVlMWQ5ZTk4MGVhOTRlNTlkODk3NzEzZjlhZjE5MzIyYmE4OW4QzuI=: 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.941 12:50:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.941 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.879 nvme0n1 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:01.879 12:50:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.879 12:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:01.879 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:01.880 12:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.880 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.811 nvme0n1 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.811 12:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:02.811 12:50:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.811 12:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.743 nvme0n1 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:03.743 12:50:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGMzMWNiOGI0M2ViZGI4MGIxZTc1Y2I0MjI3NDMxOGZmNzE5OWRmNzdjZTViNzM3mdjB/w==: 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: ]] 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDRkNjQxYTExZDlhYTk3OWYxYjBhODcyOTA3NzQwZDnNmsSG: 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.743 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:03.744 12:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:04.690 nvme0n1 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTIyYjBkOGVlNzUwMmQ3MWE0MTAxYjU2YzE3NDNkNGY0MmE4OTAwZmI4OTYxMDJjZTNlMDRhYzNlYmU2ZWI5YQaKCe0=: 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:04.690 
12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.690 12:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.622 nvme0n1 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:36:05.622 
12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.622 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.623 request: 00:36:05.623 { 00:36:05.623 "name": "nvme0", 00:36:05.623 "trtype": "tcp", 00:36:05.623 "traddr": "10.0.0.1", 00:36:05.623 "adrfam": "ipv4", 00:36:05.623 "trsvcid": "4420", 00:36:05.623 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.623 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.623 "prchk_reftag": false, 00:36:05.623 "prchk_guard": false, 00:36:05.623 "hdgst": false, 00:36:05.623 "ddgst": false, 00:36:05.623 "allow_unrecognized_csi": false, 00:36:05.623 "method": "bdev_nvme_attach_controller", 00:36:05.623 "req_id": 1 00:36:05.623 } 00:36:05.623 Got JSON-RPC error response 00:36:05.623 response: 00:36:05.623 { 00:36:05.623 "code": -5, 00:36:05.623 "message": "Input/output 
error" 00:36:05.623 } 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.623 request: 00:36:05.623 { 00:36:05.623 "name": "nvme0", 00:36:05.623 "trtype": "tcp", 00:36:05.623 "traddr": "10.0.0.1", 
00:36:05.623 "adrfam": "ipv4", 00:36:05.623 "trsvcid": "4420", 00:36:05.623 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.623 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.623 "prchk_reftag": false, 00:36:05.623 "prchk_guard": false, 00:36:05.623 "hdgst": false, 00:36:05.623 "ddgst": false, 00:36:05.623 "dhchap_key": "key2", 00:36:05.623 "allow_unrecognized_csi": false, 00:36:05.623 "method": "bdev_nvme_attach_controller", 00:36:05.623 "req_id": 1 00:36:05.623 } 00:36:05.623 Got JSON-RPC error response 00:36:05.623 response: 00:36:05.623 { 00:36:05.623 "code": -5, 00:36:05.623 "message": "Input/output error" 00:36:05.623 } 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.623 12:50:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.623 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:05.881 12:50:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 request: 00:36:05.881 { 00:36:05.881 "name": "nvme0", 00:36:05.881 "trtype": "tcp", 00:36:05.881 "traddr": "10.0.0.1", 00:36:05.881 "adrfam": "ipv4", 00:36:05.881 "trsvcid": "4420", 00:36:05.881 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:05.881 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:05.881 "prchk_reftag": false, 00:36:05.881 "prchk_guard": false, 00:36:05.881 "hdgst": false, 00:36:05.881 "ddgst": false, 00:36:05.881 "dhchap_key": "key1", 00:36:05.881 "dhchap_ctrlr_key": "ckey2", 00:36:05.881 "allow_unrecognized_csi": false, 00:36:05.881 "method": "bdev_nvme_attach_controller", 00:36:05.881 "req_id": 1 00:36:05.881 } 00:36:05.881 Got JSON-RPC error response 00:36:05.881 response: 00:36:05.881 { 00:36:05.881 "code": -5, 00:36:05.881 "message": "Input/output error" 00:36:05.881 } 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.881 12:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 nvme0n1 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.881 12:50:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.881 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.139 12:50:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.139 request: 00:36:06.139 { 00:36:06.139 "name": "nvme0", 00:36:06.139 "dhchap_key": "key1", 00:36:06.139 "dhchap_ctrlr_key": "ckey2", 00:36:06.139 "method": "bdev_nvme_set_keys", 00:36:06.139 "req_id": 1 00:36:06.139 } 00:36:06.139 Got JSON-RPC error response 00:36:06.139 response: 00:36:06.139 { 00:36:06.139 "code": -13, 00:36:06.139 "message": "Permission denied" 00:36:06.139 } 00:36:06.139 
12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:06.139 12:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:07.072 12:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.072 12:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:07.072 12:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.072 12:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.072 12:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.329 12:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:07.329 12:50:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzJkNjBiYjJmMThkYWJkZjFmYjE2ODAxN2U3MWRmMjQ2NTYyYmQwYzAyN2I5ODA2Rm7xUg==: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: ]] 00:36:08.312 12:50:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Mjk0OTNmZmFhM2M2MTMyNjhiYTQ0OTUxMjlhMDJjYmQyMWE4YzVlNjRhNzIwODE33lRnpQ==: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.312 nvme0n1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.312 12:50:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZmMxZTk3OGFlYjJiMzc4MTg3ODQ2Y2YyNGI3MGZiMTSs3uEj: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: ]] 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDU2OWJiZTRjODUzYzU5MGNmMTdkMzBlZGY1ODBmMTMX9Cdj: 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:08.312 
12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.312 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.595 request: 00:36:08.595 { 00:36:08.595 "name": "nvme0", 00:36:08.595 "dhchap_key": "key2", 00:36:08.595 "dhchap_ctrlr_key": "ckey1", 00:36:08.595 "method": "bdev_nvme_set_keys", 00:36:08.595 "req_id": 1 00:36:08.595 } 00:36:08.596 Got JSON-RPC error response 00:36:08.596 response: 00:36:08.596 { 00:36:08.596 "code": -13, 00:36:08.596 "message": "Permission denied" 00:36:08.596 } 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.596 12:50:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:08.596 12:50:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:09.529 12:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.529 12:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.529 12:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.529 12:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:09.529 12:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.529 12:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:09.529 12:50:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:10.462 
12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:10.462 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:10.462 rmmod nvme_tcp 00:36:10.720 rmmod nvme_fabrics 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 795918 ']' 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 795918 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 795918 ']' 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 795918 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 795918 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 795918' 00:36:10.720 killing process with pid 795918 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 795918 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 795918 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:10.720 12:50:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir 
/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:13.254 12:50:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:13.254 12:50:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.191 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.191 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.191 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:14.191 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.191 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.191 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:14.191 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.191 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.5 
(8086 0e25): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:14.191 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:15.130 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:15.389 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eTo /tmp/spdk.key-null.JhJ /tmp/spdk.key-sha256.loZ /tmp/spdk.key-sha384.kUn /tmp/spdk.key-sha512.xlu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:15.389 12:50:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.327 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:16.327 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:16.327 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:16.327 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:16.327 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:16.327 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:16.327 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:16.327 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:16.327 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:16.327 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:16.585 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:16.585 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:16.585 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:16.585 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:16.585 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:16.585 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:16.585 
0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:16.585 00:36:16.585 real 0m52.138s 00:36:16.585 user 0m49.413s 00:36:16.585 sys 0m6.270s 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.585 ************************************ 00:36:16.585 END TEST nvmf_auth_host 00:36:16.585 ************************************ 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.585 ************************************ 00:36:16.585 START TEST nvmf_digest 00:36:16.585 ************************************ 00:36:16.585 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:16.843 * Looking for test storage... 
00:36:16.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:16.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.844 --rc genhtml_branch_coverage=1 00:36:16.844 --rc genhtml_function_coverage=1 00:36:16.844 --rc genhtml_legend=1 00:36:16.844 --rc geninfo_all_blocks=1 00:36:16.844 --rc geninfo_unexecuted_blocks=1 00:36:16.844 00:36:16.844 ' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:16.844 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:36:16.844 --rc genhtml_branch_coverage=1 00:36:16.844 --rc genhtml_function_coverage=1 00:36:16.844 --rc genhtml_legend=1 00:36:16.844 --rc geninfo_all_blocks=1 00:36:16.844 --rc geninfo_unexecuted_blocks=1 00:36:16.844 00:36:16.844 ' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:16.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.844 --rc genhtml_branch_coverage=1 00:36:16.844 --rc genhtml_function_coverage=1 00:36:16.844 --rc genhtml_legend=1 00:36:16.844 --rc geninfo_all_blocks=1 00:36:16.844 --rc geninfo_unexecuted_blocks=1 00:36:16.844 00:36:16.844 ' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:16.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.844 --rc genhtml_branch_coverage=1 00:36:16.844 --rc genhtml_function_coverage=1 00:36:16.844 --rc genhtml_legend=1 00:36:16.844 --rc geninfo_all_blocks=1 00:36:16.844 --rc geninfo_unexecuted_blocks=1 00:36:16.844 00:36:16.844 ' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:16.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:16.844 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:16.845 12:50:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:19.376 
12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:19.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:19.376 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:19.376 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:19.376 
12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:19.376 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.376 12:50:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:19.376 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:19.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:19.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:36:19.377 00:36:19.377 --- 10.0.0.2 ping statistics --- 00:36:19.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.377 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:19.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:19.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:36:19.377 00:36:19.377 --- 10.0.0.1 ping statistics --- 00:36:19.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.377 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.377 ************************************ 00:36:19.377 START TEST nvmf_digest_clean 00:36:19.377 ************************************ 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=805651 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 805651 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 805651 ']' 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.377 [2024-11-05 12:50:48.302834] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:36:19.377 [2024-11-05 12:50:48.302947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.377 [2024-11-05 12:50:48.372489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.377 [2024-11-05 12:50:48.414772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.377 [2024-11-05 12:50:48.414840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.377 [2024-11-05 12:50:48.414877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.377 [2024-11-05 12:50:48.414889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.377 [2024-11-05 12:50:48.414899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:19.377 [2024-11-05 12:50:48.415488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.377 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.633 null0 00:36:19.633 [2024-11-05 12:50:48.640541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.633 [2024-11-05 12:50:48.664782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=805671 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 805671 /var/tmp/bperf.sock 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 805671 ']' 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:19.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:19.634 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:19.634 [2024-11-05 12:50:48.712459] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:19.634 [2024-11-05 12:50:48.712536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805671 ] 00:36:19.634 [2024-11-05 12:50:48.779784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.634 [2024-11-05 12:50:48.829083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.891 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:19.891 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:19.891 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:19.891 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:19.891 12:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:20.201 12:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:20.201 12:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:20.458 nvme0n1 00:36:20.458 12:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:20.458 12:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:20.716 Running I/O for 2 seconds... 00:36:22.580 17762.00 IOPS, 69.38 MiB/s [2024-11-05T11:50:51.818Z] 18182.50 IOPS, 71.03 MiB/s 00:36:22.580 Latency(us) 00:36:22.580 [2024-11-05T11:50:51.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.580 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:22.580 nvme0n1 : 2.00 18202.30 71.10 0.00 0.00 7025.02 3349.62 17961.72 00:36:22.580 [2024-11-05T11:50:51.818Z] =================================================================================================================== 00:36:22.580 [2024-11-05T11:50:51.818Z] Total : 18202.30 71.10 0.00 0.00 7025.02 3349.62 17961.72 00:36:22.580 { 00:36:22.580 "results": [ 00:36:22.580 { 00:36:22.580 "job": "nvme0n1", 00:36:22.580 "core_mask": "0x2", 00:36:22.580 "workload": "randread", 00:36:22.580 "status": "finished", 00:36:22.580 "queue_depth": 128, 00:36:22.580 "io_size": 4096, 00:36:22.580 "runtime": 2.004856, 00:36:22.580 "iops": 18202.304803936044, 00:36:22.580 "mibps": 71.10275314037517, 00:36:22.580 "io_failed": 0, 00:36:22.580 "io_timeout": 0, 00:36:22.580 "avg_latency_us": 7025.018271956774, 00:36:22.580 "min_latency_us": 3349.617777777778, 00:36:22.580 "max_latency_us": 17961.71851851852 00:36:22.580 } 00:36:22.580 ], 00:36:22.580 "core_count": 1 00:36:22.580 } 00:36:22.580 12:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:22.580 12:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:36:22.580 12:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:22.580 12:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:22.580 12:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:22.580 | select(.opcode=="crc32c") 00:36:22.580 | "\(.module_name) \(.executed)"' 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 805671 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 805671 ']' 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 805671 00:36:22.838 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 805671 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 805671' 00:36:23.096 killing process with pid 805671 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 805671 00:36:23.096 Received shutdown signal, test time was about 2.000000 seconds 00:36:23.096 00:36:23.096 Latency(us) 00:36:23.096 [2024-11-05T11:50:52.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:23.096 [2024-11-05T11:50:52.334Z] =================================================================================================================== 00:36:23.096 [2024-11-05T11:50:52.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 805671 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:23.096 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=806187 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 806187 /var/tmp/bperf.sock 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 806187 ']' 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:23.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:23.097 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:23.355 [2024-11-05 12:50:52.350809] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:23.355 [2024-11-05 12:50:52.350901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806187 ] 00:36:23.355 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:23.355 Zero copy mechanism will not be used. 
00:36:23.355 [2024-11-05 12:50:52.418295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.355 [2024-11-05 12:50:52.470193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.613 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:23.613 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:23.613 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:23.613 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:23.613 12:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:23.870 12:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:23.870 12:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:24.436 nvme0n1 00:36:24.436 12:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:24.436 12:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:24.436 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:24.436 Zero copy mechanism will not be used. 00:36:24.436 Running I/O for 2 seconds... 
00:36:26.743 4403.00 IOPS, 550.38 MiB/s [2024-11-05T11:50:55.981Z] 4408.00 IOPS, 551.00 MiB/s 00:36:26.743 Latency(us) 00:36:26.743 [2024-11-05T11:50:55.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.743 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:26.743 nvme0n1 : 2.04 4319.45 539.93 0.00 0.00 3629.56 916.29 46991.74 00:36:26.743 [2024-11-05T11:50:55.981Z] =================================================================================================================== 00:36:26.743 [2024-11-05T11:50:55.981Z] Total : 4319.45 539.93 0.00 0.00 3629.56 916.29 46991.74 00:36:26.743 { 00:36:26.743 "results": [ 00:36:26.743 { 00:36:26.743 "job": "nvme0n1", 00:36:26.743 "core_mask": "0x2", 00:36:26.743 "workload": "randread", 00:36:26.743 "status": "finished", 00:36:26.743 "queue_depth": 16, 00:36:26.743 "io_size": 131072, 00:36:26.743 "runtime": 2.044707, 00:36:26.743 "iops": 4319.445279934974, 00:36:26.743 "mibps": 539.9306599918717, 00:36:26.743 "io_failed": 0, 00:36:26.743 "io_timeout": 0, 00:36:26.743 "avg_latency_us": 3629.5640150295226, 00:36:26.743 "min_latency_us": 916.2903703703704, 00:36:26.743 "max_latency_us": 46991.73925925926 00:36:26.743 } 00:36:26.743 ], 00:36:26.743 "core_count": 1 00:36:26.743 } 00:36:26.743 12:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:26.743 12:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:26.743 12:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:26.743 12:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:26.743 | select(.opcode=="crc32c") 00:36:26.743 | "\(.module_name) \(.executed)"' 00:36:26.743 12:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 806187 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 806187 ']' 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 806187 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 806187 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 806187' 00:36:27.001 killing process with pid 806187 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 806187 00:36:27.001 Received shutdown signal, test time was about 2.000000 seconds 00:36:27.001 
00:36:27.001 Latency(us) 00:36:27.001 [2024-11-05T11:50:56.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.001 [2024-11-05T11:50:56.239Z] =================================================================================================================== 00:36:27.001 [2024-11-05T11:50:56.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:27.001 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 806187 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=806612 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 806612 /var/tmp/bperf.sock 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 806612 ']' 00:36:27.259 12:50:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:27.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:27.259 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:27.259 [2024-11-05 12:50:56.329831] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:27.259 [2024-11-05 12:50:56.329929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806612 ] 00:36:27.259 [2024-11-05 12:50:56.396966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.260 [2024-11-05 12:50:56.442324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.517 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:27.517 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:27.517 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:27.517 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:27.517 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:27.775 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:27.775 12:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:28.033 nvme0n1 00:36:28.033 12:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:28.033 12:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.291 Running I/O for 2 seconds... 
00:36:30.156 18369.00 IOPS, 71.75 MiB/s [2024-11-05T11:50:59.394Z] 18316.50 IOPS, 71.55 MiB/s 00:36:30.156 Latency(us) 00:36:30.156 [2024-11-05T11:50:59.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.156 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:30.156 nvme0n1 : 2.01 18316.41 71.55 0.00 0.00 6972.23 2779.21 11311.03 00:36:30.156 [2024-11-05T11:50:59.394Z] =================================================================================================================== 00:36:30.156 [2024-11-05T11:50:59.394Z] Total : 18316.41 71.55 0.00 0.00 6972.23 2779.21 11311.03 00:36:30.156 { 00:36:30.156 "results": [ 00:36:30.156 { 00:36:30.156 "job": "nvme0n1", 00:36:30.156 "core_mask": "0x2", 00:36:30.156 "workload": "randwrite", 00:36:30.156 "status": "finished", 00:36:30.156 "queue_depth": 128, 00:36:30.156 "io_size": 4096, 00:36:30.156 "runtime": 2.008745, 00:36:30.156 "iops": 18316.411490756665, 00:36:30.156 "mibps": 71.54848238576822, 00:36:30.156 "io_failed": 0, 00:36:30.156 "io_timeout": 0, 00:36:30.156 "avg_latency_us": 6972.233447787473, 00:36:30.156 "min_latency_us": 2779.211851851852, 00:36:30.156 "max_latency_us": 11311.028148148149 00:36:30.156 } 00:36:30.156 ], 00:36:30.156 "core_count": 1 00:36:30.156 } 00:36:30.156 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:30.156 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:30.156 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:30.156 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:30.156 | select(.opcode=="crc32c") 00:36:30.156 | "\(.module_name) \(.executed)"' 00:36:30.156 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 806612 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 806612 ']' 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 806612 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:30.414 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 806612 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 806612' 00:36:30.672 killing process with pid 806612 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 806612 00:36:30.672 Received shutdown signal, test time was about 2.000000 seconds 00:36:30.672 
00:36:30.672 Latency(us) 00:36:30.672 [2024-11-05T11:50:59.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.672 [2024-11-05T11:50:59.910Z] =================================================================================================================== 00:36:30.672 [2024-11-05T11:50:59.910Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 806612 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=807012 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 807012 /var/tmp/bperf.sock 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 807012 ']' 00:36:30.672 12:50:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:30.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:30.672 12:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:30.930 [2024-11-05 12:50:59.914060] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:30.930 [2024-11-05 12:50:59.914139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807012 ] 00:36:30.930 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:30.930 Zero copy mechanism will not be used. 
00:36:30.930 [2024-11-05 12:50:59.980986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.930 [2024-11-05 12:51:00.029630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.930 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:30.930 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:36:30.930 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:30.930 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:30.930 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:31.495 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:31.495 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:31.753 nvme0n1 00:36:31.753 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:31.753 12:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:32.010 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:32.010 Zero copy mechanism will not be used. 00:36:32.010 Running I/O for 2 seconds... 
00:36:33.875 5807.00 IOPS, 725.88 MiB/s [2024-11-05T11:51:03.113Z] 5793.50 IOPS, 724.19 MiB/s 00:36:33.875 Latency(us) 00:36:33.875 [2024-11-05T11:51:03.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.875 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:33.875 nvme0n1 : 2.00 5793.11 724.14 0.00 0.00 2755.07 1650.54 12621.75 00:36:33.875 [2024-11-05T11:51:03.113Z] =================================================================================================================== 00:36:33.875 [2024-11-05T11:51:03.113Z] Total : 5793.11 724.14 0.00 0.00 2755.07 1650.54 12621.75 00:36:33.875 { 00:36:33.875 "results": [ 00:36:33.875 { 00:36:33.875 "job": "nvme0n1", 00:36:33.875 "core_mask": "0x2", 00:36:33.875 "workload": "randwrite", 00:36:33.875 "status": "finished", 00:36:33.875 "queue_depth": 16, 00:36:33.875 "io_size": 131072, 00:36:33.875 "runtime": 2.004105, 00:36:33.875 "iops": 5793.109642458853, 00:36:33.875 "mibps": 724.1387053073566, 00:36:33.875 "io_failed": 0, 00:36:33.875 "io_timeout": 0, 00:36:33.875 "avg_latency_us": 2755.067835008135, 00:36:33.875 "min_latency_us": 1650.5362962962963, 00:36:33.875 "max_latency_us": 12621.748148148148 00:36:33.875 } 00:36:33.875 ], 00:36:33.875 "core_count": 1 00:36:33.875 } 00:36:33.875 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:33.875 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:33.875 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:33.875 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:33.875 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:36:33.875 | select(.opcode=="crc32c") 00:36:33.875 | "\(.module_name) \(.executed)"' 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 807012 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 807012 ']' 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 807012 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 807012 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 807012' 00:36:34.132 killing process with pid 807012 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 807012 00:36:34.132 Received shutdown signal, test time was about 2.000000 seconds 00:36:34.132 00:36:34.132 
Latency(us) 00:36:34.132 [2024-11-05T11:51:03.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.132 [2024-11-05T11:51:03.370Z] =================================================================================================================== 00:36:34.132 [2024-11-05T11:51:03.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:34.132 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 807012 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 805651 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 805651 ']' 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 805651 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 805651 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 805651' 00:36:34.389 killing process with pid 805651 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 805651 00:36:34.389 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 805651 00:36:34.647 00:36:34.647 real 0m15.502s 00:36:34.647 user 
0m30.804s 00:36:34.647 sys 0m4.238s 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:34.647 ************************************ 00:36:34.647 END TEST nvmf_digest_clean 00:36:34.647 ************************************ 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:34.647 ************************************ 00:36:34.647 START TEST nvmf_digest_error 00:36:34.647 ************************************ 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=807676 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:34.647 12:51:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 807676 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 807676 ']' 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:34.647 12:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.647 [2024-11-05 12:51:03.860769] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:34.647 [2024-11-05 12:51:03.860891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.905 [2024-11-05 12:51:03.937471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.905 [2024-11-05 12:51:03.982691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.905 [2024-11-05 12:51:03.982765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:34.905 [2024-11-05 12:51:03.982794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.905 [2024-11-05 12:51:03.982806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.905 [2024-11-05 12:51:03.982816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:34.905 [2024-11-05 12:51:03.983370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:34.905 [2024-11-05 12:51:04.128170] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.905 12:51:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.905 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.163 null0 00:36:35.163 [2024-11-05 12:51:04.244716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.163 [2024-11-05 12:51:04.268991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=807705 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 807705 /var/tmp/bperf.sock 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 807705 ']' 
00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:35.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:35.163 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.163 [2024-11-05 12:51:04.316477] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:35.163 [2024-11-05 12:51:04.316538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807705 ] 00:36:35.163 [2024-11-05 12:51:04.383598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.421 [2024-11-05 12:51:04.430451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.421 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:35.421 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:35.421 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:35.421 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:35.679 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:35.679 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.679 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:35.679 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.679 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:35.679 12:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:36.244 nvme0n1 00:36:36.244 12:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:36.244 12:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.244 12:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:36.244 12:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.244 12:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:36.244 12:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.244 Running I/O for 2 seconds... 00:36:36.245 [2024-11-05 12:51:05.475432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.245 [2024-11-05 12:51:05.475486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.245 [2024-11-05 12:51:05.475522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.503 [2024-11-05 12:51:05.489696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.503 [2024-11-05 12:51:05.489727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.503 [2024-11-05 12:51:05.489759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.503 [2024-11-05 12:51:05.504728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.503 [2024-11-05 12:51:05.504760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.503 [2024-11-05 12:51:05.504778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.503 [2024-11-05 12:51:05.516646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.503 [2024-11-05 12:51:05.516676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7846 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.503 [2024-11-05 12:51:05.516709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.503 [2024-11-05 12:51:05.529504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.503 [2024-11-05 12:51:05.529534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.503 [2024-11-05 12:51:05.529566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.503 [2024-11-05 12:51:05.543349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.503 [2024-11-05 12:51:05.543393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.503 [2024-11-05 12:51:05.543410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.503 [2024-11-05 12:51:05.556187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.503 [2024-11-05 12:51:05.556219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.503 [2024-11-05 12:51:05.556236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.569508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.569554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.569571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.583457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.583515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.583533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.595367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.595396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.595428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.608127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.608172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.608189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.620789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.620818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.620850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.635486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.635516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.635550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.650954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.650985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.651002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.662033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.662063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.662081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.675209] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.675238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.675270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.688346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.688374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.688406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.701098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.701128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.701161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.714447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.714478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.714495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.726940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.726969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.727001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.504 [2024-11-05 12:51:05.739719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.504 [2024-11-05 12:51:05.739751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.504 [2024-11-05 12:51:05.739769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.762 [2024-11-05 12:51:05.752300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.762 [2024-11-05 12:51:05.752329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.762 [2024-11-05 12:51:05.752361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.762 [2024-11-05 12:51:05.765257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.762 [2024-11-05 12:51:05.765286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.762 [2024-11-05 12:51:05.765318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.762 [2024-11-05 12:51:05.779813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.762 [2024-11-05 12:51:05.779844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.762 [2024-11-05 12:51:05.779867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.762 [2024-11-05 12:51:05.793428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.762 [2024-11-05 12:51:05.793460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.793478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.804867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.804898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.804924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.818918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.818949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 
12:51:05.818982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.832572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.832603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.832620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.844535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.844566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.844583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.860029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.860062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.860079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.874680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.874711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6345 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.874729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.886373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.886405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.886421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.900813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.900843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.900884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.917943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.917974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.917991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.928165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.928203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.928221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.943122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.943167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.943184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.958761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.958790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.958823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.971630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.971660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.971691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.983572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 
12:51:05.983616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.983632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:36.763 [2024-11-05 12:51:05.996958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:36.763 [2024-11-05 12:51:05.996991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.763 [2024-11-05 12:51:05.997009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.021 [2024-11-05 12:51:06.013793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.021 [2024-11-05 12:51:06.013824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.013856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.029978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.030018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.030035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.043196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.043242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.043268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.054678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.054706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.054738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.069663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.069693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.069725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.080642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.080672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.080690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.094775] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.094806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.094823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.110817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.110849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.110875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.123455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.123486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.123503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.140838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.140893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.140912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.154318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.154350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.154368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.170218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.170257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.170276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.181539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.181570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.181587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.196254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.196285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.196302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.207186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.207217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.207235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.221892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.221939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.221957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.235852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.235907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 12:51:06.235925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.022 [2024-11-05 12:51:06.247896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.022 [2024-11-05 12:51:06.247928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.022 [2024-11-05 
12:51:06.247961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.280 [2024-11-05 12:51:06.264928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.280 [2024-11-05 12:51:06.264960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.280 [2024-11-05 12:51:06.264978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.280 [2024-11-05 12:51:06.280906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.280 [2024-11-05 12:51:06.280936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.280 [2024-11-05 12:51:06.280968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.280 [2024-11-05 12:51:06.296648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.280 [2024-11-05 12:51:06.296683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.280 [2024-11-05 12:51:06.296701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.280 [2024-11-05 12:51:06.313255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.280 [2024-11-05 12:51:06.313302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3214 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.280 [2024-11-05 12:51:06.313320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.328921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.328952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.328969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.343420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.343451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.343468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.354247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.354275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.354306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.369506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.369534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.369566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.381444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.381474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.381507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.394894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.394925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.394943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.407813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.407857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.407890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.420741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.420770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.420802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.434025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.434055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.434087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.446221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.446250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.446281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 18473.00 IOPS, 72.16 MiB/s [2024-11-05T11:51:06.519Z] [2024-11-05 12:51:06.460181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.460210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.460241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 
[2024-11-05 12:51:06.472452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.472480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.472513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.487114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.487156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.487172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.500515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.500565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.500582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.281 [2024-11-05 12:51:06.514015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.281 [2024-11-05 12:51:06.514047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.281 [2024-11-05 12:51:06.514064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.529298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.529342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.529359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.540227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.540255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.540287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.554128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.554173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.554190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.568390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.568418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.568449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.582663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.582692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.582725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.596015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.596045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.596061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.611579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.611607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.611638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.621956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.621984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:37.539 [2024-11-05 12:51:06.622015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.637965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.637994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.638032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.652040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.539 [2024-11-05 12:51:06.652071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.539 [2024-11-05 12:51:06.652088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.539 [2024-11-05 12:51:06.666939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.666967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.666999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.540 [2024-11-05 12:51:06.684180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.684208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:76 nsid:1 lba:21133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.684223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.540 [2024-11-05 12:51:06.695113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.695141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.695175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.540 [2024-11-05 12:51:06.708769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.708798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.708828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.540 [2024-11-05 12:51:06.722350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.722378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.722408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.540 [2024-11-05 12:51:06.737683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.737714] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.737731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.540 [2024-11-05 12:51:06.748774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.748803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.748834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.540 [2024-11-05 12:51:06.765823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.540 [2024-11-05 12:51:06.765880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.540 [2024-11-05 12:51:06.765899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.781774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.781821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.781838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.796661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.796691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.796722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.808249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.808277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.808308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.822192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.822220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.822250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.839265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.839297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.839314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.855025] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.855055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.855087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.870638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.870666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.870696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.882693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.882724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.882741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.896253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.896298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.896315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.909647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.909678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.909696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.922930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.922960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.922977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.935615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.935643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.935674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.948482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.948511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.948543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.963662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.963691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.963722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.975277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.975322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.975338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:06.989724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:06.989752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:06.989783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:07.003714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:07.003745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:07.003782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:07.018013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.798 [2024-11-05 12:51:07.018046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.798 [2024-11-05 12:51:07.018064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.798 [2024-11-05 12:51:07.029518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:37.799 [2024-11-05 12:51:07.029550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.799 [2024-11-05 12:51:07.029585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.045973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.046006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.046024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.060870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.060911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.060943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.075558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.075590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.075607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.086836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.086888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.086919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.101179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.101208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.101238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.117757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.117788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:23521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.117819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.133424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.133458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.133490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.148997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.149028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.149060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.165361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.165390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.165421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.177723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.177768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.177785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.189436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.189464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.189494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.204312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.204340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.204371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.220370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.220401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.220434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.235324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.235353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.235369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.250118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.250148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.250171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.261353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.261381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.261413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.276395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.276423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.276454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.057 [2024-11-05 12:51:07.291109] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.057 [2024-11-05 12:51:07.291145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.057 [2024-11-05 12:51:07.291175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.306217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.306250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.306268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.322671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.322703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.322730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.334643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.334689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.334706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.350290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.350320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.350351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.363197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.363229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.363246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.376123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.376186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.376203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.389058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.389089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.389106] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.404424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.404454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.404472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.419472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.419518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.419537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.430783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.430813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.430830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 [2024-11-05 12:51:07.445799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.445828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 
12:51:07.445876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 18351.00 IOPS, 71.68 MiB/s [2024-11-05T11:51:07.554Z] [2024-11-05 12:51:07.460513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x222ee40) 00:36:38.316 [2024-11-05 12:51:07.460544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.316 [2024-11-05 12:51:07.460562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.316 00:36:38.316 Latency(us) 00:36:38.316 [2024-11-05T11:51:07.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.316 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:38.316 nvme0n1 : 2.01 18343.25 71.65 0.00 0.00 6970.02 3398.16 21942.42 00:36:38.316 [2024-11-05T11:51:07.554Z] =================================================================================================================== 00:36:38.316 [2024-11-05T11:51:07.554Z] Total : 18343.25 71.65 0.00 0.00 6970.02 3398.16 21942.42 00:36:38.316 { 00:36:38.316 "results": [ 00:36:38.316 { 00:36:38.316 "job": "nvme0n1", 00:36:38.316 "core_mask": "0x2", 00:36:38.316 "workload": "randread", 00:36:38.316 "status": "finished", 00:36:38.316 "queue_depth": 128, 00:36:38.316 "io_size": 4096, 00:36:38.316 "runtime": 2.007823, 00:36:38.316 "iops": 18343.250376153675, 00:36:38.316 "mibps": 71.6533217818503, 00:36:38.316 "io_failed": 0, 00:36:38.316 "io_timeout": 0, 00:36:38.316 "avg_latency_us": 6970.022588831568, 00:36:38.316 "min_latency_us": 3398.162962962963, 00:36:38.316 "max_latency_us": 21942.423703703702 00:36:38.316 } 00:36:38.316 ], 00:36:38.316 "core_count": 1 00:36:38.316 } 00:36:38.316 12:51:07 
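The Latency(us) summary and results JSON above report both IOPS and MiB/s for the randread job; the two figures are linked by the 4096-byte I/O size. A minimal sketch (plain Python, independent of SPDK) that reproduces the logged MiB/s value from the logged `iops` and `io_size` fields:

```python
# Recompute throughput from the bdevperf results JSON printed above.
# The literal values are copied from the log; the conversion itself is generic.
iops = 18343.250376153675   # "iops" field in the results block
io_size = 4096              # "io_size" field, bytes per I/O

# bytes per second -> MiB per second
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))      # matches the 71.65 MiB/s reported in the table
```

This is only a consistency check on the numbers already present in the transcript, not part of the test itself.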
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:38.316 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:38.316 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:38.316 | .driver_specific 00:36:38.316 | .nvme_error 00:36:38.316 | .status_code 00:36:38.316 | .command_transient_transport_error' 00:36:38.316 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 807705 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 807705 ']' 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 807705 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 807705 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 807705' 00:36:38.575 
killing process with pid 807705 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 807705 00:36:38.575 Received shutdown signal, test time was about 2.000000 seconds 00:36:38.575 00:36:38.575 Latency(us) 00:36:38.575 [2024-11-05T11:51:07.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.575 [2024-11-05T11:51:07.813Z] =================================================================================================================== 00:36:38.575 [2024-11-05T11:51:07.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:38.575 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 807705 00:36:38.832 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:38.832 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:38.832 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:38.832 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=808426 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 808426 /var/tmp/bperf.sock 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 808426 ']' 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:38.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:38.833 12:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:38.833 [2024-11-05 12:51:08.021304] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:38.833 [2024-11-05 12:51:08.021382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808426 ] 00:36:38.833 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:38.833 Zero copy mechanism will not be used. 
00:36:39.090 [2024-11-05 12:51:08.092680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.090 [2024-11-05 12:51:08.139575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.090 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:39.090 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:39.090 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:39.090 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:39.348 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:39.348 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.348 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:39.348 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.348 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:39.348 12:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:39.944 nvme0n1 00:36:39.944 12:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:39.944 12:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.944 12:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:39.944 12:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.944 12:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:39.944 12:51:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:39.944 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:39.944 Zero copy mechanism will not be used. 00:36:39.944 Running I/O for 2 seconds... 00:36:39.944 [2024-11-05 12:51:09.142478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:39.944 [2024-11-05 12:51:09.142544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.945 [2024-11-05 12:51:09.142565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:39.945 [2024-11-05 12:51:09.148610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:39.945 [2024-11-05 12:51:09.148644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.945 [2024-11-05 12:51:09.148663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:39.945 
[2024-11-05 12:51:09.156098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:39.945 [2024-11-05 12:51:09.156129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.945 [2024-11-05 12:51:09.156147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:39.945 [2024-11-05 12:51:09.162983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:39.945 [2024-11-05 12:51:09.163015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:39.945 [2024-11-05 12:51:09.163033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.228 [2024-11-05 12:51:09.167191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.228 [2024-11-05 12:51:09.167223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.228 [2024-11-05 12:51:09.167241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.228 [2024-11-05 12:51:09.172370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.228 [2024-11-05 12:51:09.172400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.228 [2024-11-05 12:51:09.172417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.228 [2024-11-05 12:51:09.177028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.228 [2024-11-05 12:51:09.177056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.228 [2024-11-05 12:51:09.177087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.228 [2024-11-05 12:51:09.182551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.228 [2024-11-05 12:51:09.182582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.228 [2024-11-05 12:51:09.182599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.228 [2024-11-05 12:51:09.187021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.228 [2024-11-05 12:51:09.187052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.228 [2024-11-05 12:51:09.187085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.228 [2024-11-05 12:51:09.192642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.228 [2024-11-05 12:51:09.192675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.192699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.197762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.197794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.197813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.204974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.205007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.205025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.212561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.212590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.212625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.220518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.220550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.220567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.228814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.228846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.228890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.236590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.236621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.236654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.243353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.243383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.243415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.250608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.250640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.250658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.258073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.258105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.258123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.263009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.263040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.263058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.268623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.268667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.229 [2024-11-05 12:51:09.268683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.229 [2024-11-05 12:51:09.275288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.229 [2024-11-05 12:51:09.275332] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.275349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.281743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.281789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.281807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.287158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.287187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.287217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.292021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.292066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.292083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.297132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.297179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.297197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.302524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.302556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.302593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.307692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.307723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.307741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.312257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.312287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.312319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.317149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.317198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.317215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.321931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.321962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.321979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.326588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.326632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.326649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.331202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.331233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.331250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.335844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.335882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.335908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.340491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.340536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.340554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.345003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.345038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.345055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.229 [2024-11-05 12:51:09.349630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.229 [2024-11-05 12:51:09.349661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.229 [2024-11-05 12:51:09.349678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.354178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.354207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.354223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.359004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.359034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.359051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.363997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.364026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.364057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.368977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.369006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.369038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.373528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.373573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.373591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.378217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.378261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.378278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.382964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.383007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.383024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.387732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.387761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.387777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.393678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.393708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.393741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.399482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.399515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.399534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.406716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.406747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.406781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.414533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.414562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.414592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.421177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.421233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.421250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.426768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.426800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.426819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.433050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.433081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.433098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.438696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.438727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.438766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.444026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.444071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.444088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.449286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.449316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.449349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.454550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.454581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.454598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.459625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.459655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.459673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.230 [2024-11-05 12:51:09.465009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.230 [2024-11-05 12:51:09.465040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.230 [2024-11-05 12:51:09.465058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.469898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.469928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.469961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.474723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.474753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.474786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.480096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.480152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.480171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.485798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.485835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.485853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.491053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.491085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.491103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.496335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.496367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.496384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.500055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.500086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.500103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.503552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.503595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.503611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.508345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.508375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.508408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.512907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.512936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.512968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.517565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.517594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.517625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.523191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.523234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.523251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.530644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.530688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.530705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.537662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.537709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.537727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.543554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.543587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.543605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.548592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.548623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.548640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.554432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.554465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.554483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.559839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.559895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.559928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.564445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.564475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.564492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.568991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.569021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.569038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.573581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.573611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.490 [2024-11-05 12:51:09.573633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.490 [2024-11-05 12:51:09.578194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.490 [2024-11-05 12:51:09.578224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.578241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.582913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.582942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.582973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.587408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.587451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.587467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.592123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.592151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.592167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.596808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.596838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.596855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.601361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.601392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.601409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.606036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.606066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.606083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.610592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.610622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.610641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.615387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.615419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.615436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.620222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.620253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.620271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.625473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.625504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.625522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.631310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.631341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.631359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.636751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.636794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.636811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.642297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.642328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.642345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.648077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.648107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.648139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.654002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.654036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.654055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.660305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.660353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.660377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.666080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.666112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.666130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.672238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.672271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.672290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.678501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.678533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.678550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.684189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.684221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.684239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.687902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.687933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.687951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.691297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.691327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:40.491 [2024-11-05 12:51:09.691345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:40.491 [2024-11-05 12:51:09.695256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0)
00:36:40.491 [2024-11-05 12:51:09.695286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:36:40.491 [2024-11-05 12:51:09.695304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.491 [2024-11-05 12:51:09.700782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.491 [2024-11-05 12:51:09.700814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.491 [2024-11-05 12:51:09.700831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.491 [2024-11-05 12:51:09.706042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.491 [2024-11-05 12:51:09.706078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.491 [2024-11-05 12:51:09.706112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.491 [2024-11-05 12:51:09.713228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.491 [2024-11-05 12:51:09.713260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.491 [2024-11-05 12:51:09.713278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.491 [2024-11-05 12:51:09.720942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.491 [2024-11-05 12:51:09.720974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.491 [2024-11-05 12:51:09.721008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.492 [2024-11-05 12:51:09.727933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.492 [2024-11-05 12:51:09.727966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.492 [2024-11-05 12:51:09.727985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.734255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.734287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.734305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.740890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.740935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.740953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.743958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.743987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.744004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.748831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.748884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.748913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.753469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.753497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.753526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.758775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.758821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.758838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.766029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.766058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.766089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.773548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.773579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.773612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.779705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.779735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.779751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.786367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.786399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.786416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.792139] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.792170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.792188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.798141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.798187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.798204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.804541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.804573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.804590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.810911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.810958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.810981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.816917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.816948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.816980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.822823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.749 [2024-11-05 12:51:09.822854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.749 [2024-11-05 12:51:09.822895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.749 [2024-11-05 12:51:09.828569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.828599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.828632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.834495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.834527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.834545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.840334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.840364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.840380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.846073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.846104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.846121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.851636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.851666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.851698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.857486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.857517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.857534] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.863353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.863391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.863409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.869166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.869198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.869216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.874256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.874287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.874305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.880099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.880144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.880160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.885929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.885975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.885992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.891646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.891691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.891707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.897496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.897526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.897560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.903473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.903505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.903523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.909327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.909374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.909391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.915062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.915094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.915111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.921800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.921846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.921870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.928300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.928348] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.928366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.933581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.933613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.933631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.938766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.938796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.938827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.944286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.944316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.944334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.949092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.949121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.949152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.954165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.954203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.954235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.960409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.960458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.960476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.968129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.968175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.968192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.974347] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.974377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.974410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.980680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.980724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.980740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:40.750 [2024-11-05 12:51:09.986740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:40.750 [2024-11-05 12:51:09.986772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:40.750 [2024-11-05 12:51:09.986789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:09.992162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:09.992191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:09.992208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:09.997511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:09.997543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:09.997561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.003051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.003086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.003104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.007822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.007876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.007895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.013451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.013486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.013520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.018482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.018514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.018533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.023603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.023635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.023653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.028807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.028840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.028865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.034504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.034548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 
12:51:10.034566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.038854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.038901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.038924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.044556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.009 [2024-11-05 12:51:10.044589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.009 [2024-11-05 12:51:10.044607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.009 [2024-11-05 12:51:10.052084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.052117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.052135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.059351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.059383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.059414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.067646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.067678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.067696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.073978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.074009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.074026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.079036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.079082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.079099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.084611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.084641] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.084674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.089535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.089566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.089599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.094608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.094637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.094668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.099875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.099907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.099925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.105452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 
12:51:10.105484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.105502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.110924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.110961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.110981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.116271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.116302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.116320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.122325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.122356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.122388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.128104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.128136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.128153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.133916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.133948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.133980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.010 5548.00 IOPS, 693.50 MiB/s [2024-11-05T11:51:10.248Z] [2024-11-05 12:51:10.140916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.140966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.140990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.147158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.147191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.147208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:36:41.010 [2024-11-05 12:51:10.153188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.153218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.153252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.159305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.159337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.159354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.163684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.163716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.163734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.169532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.169564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.169596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.176574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.176618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.176634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.184172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.184203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.184221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.190317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.190349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.190366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.195594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.195625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.195643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.200411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.200442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.010 [2024-11-05 12:51:10.200474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.010 [2024-11-05 12:51:10.205825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.010 [2024-11-05 12:51:10.205856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.205882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.211231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.211261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.211301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.216058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.216089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:41.011 [2024-11-05 12:51:10.216106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.221234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.221263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.221295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.226041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.226071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.226089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.230967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.230997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.231015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.235779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.235810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.235826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.240358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.240402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.240419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.011 [2024-11-05 12:51:10.245957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.011 [2024-11-05 12:51:10.245988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.011 [2024-11-05 12:51:10.246005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.251771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.269 [2024-11-05 12:51:10.251803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.251820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.256660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.269 [2024-11-05 12:51:10.256690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.256707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.261543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.269 [2024-11-05 12:51:10.261572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.261590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.266136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.269 [2024-11-05 12:51:10.266165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.266182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.270811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.269 [2024-11-05 12:51:10.270841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.270858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.275314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 
00:36:41.269 [2024-11-05 12:51:10.275344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.275360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.279984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.269 [2024-11-05 12:51:10.280013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.280030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.269 [2024-11-05 12:51:10.284874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.269 [2024-11-05 12:51:10.284924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.269 [2024-11-05 12:51:10.284941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.289500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.289544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.289561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.294283] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.294313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.294336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.299062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.299107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.299123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.303971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.304001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.304018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.308767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.308800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.308832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.313949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.313979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.313996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.318840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.318879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.318897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.323451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.323480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.323498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.328035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.328065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.328082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.332674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.332704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.332721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.337222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.337258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.337276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.341781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.341810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.341827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.346308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.346355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.350875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.350905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.350922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.355469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.355500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.355517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.360042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.360071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.360088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.364747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.364776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.364793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.369416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.369445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.369462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.374132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.374173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.374190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.379362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.379390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.379422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.384824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.384854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.384881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.389646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.389691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.389708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.394966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.394997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.395014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.399726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.399755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.399788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.404303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.404333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.404350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.408824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.408881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.408902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.270 [2024-11-05 12:51:10.413534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.270 [2024-11-05 12:51:10.413564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.270 [2024-11-05 12:51:10.413581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.418354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.418394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.418418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.422970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.423001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.423019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.427665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.427695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.427711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.433237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.433268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.433300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.438533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.438565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.438584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.443438] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.443473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.443491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.448248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.448287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.448304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.452993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.453024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.453041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.457657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.457688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.457705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.462265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.462312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.462331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.466936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.466967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.466985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.472058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.472089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.472107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.477009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.477041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.477060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.481692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.481722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.481740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.486495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.486526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.486542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.491555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.491585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.491619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.496760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.496790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.496823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.500161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.500193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.500210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.504054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.504084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.504101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.271 [2024-11-05 12:51:10.508097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.271 [2024-11-05 12:51:10.508128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.271 [2024-11-05 12:51:10.508145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.511413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.511445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.511462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.515604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.515634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.515652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.519617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.519673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.519692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.524029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.524059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.524091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.528473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.528519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.528545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.533335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.533377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.533395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.539275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.539315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.539334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.543675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.543706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.543724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.549890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.549921] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.549952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.557101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.557131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.530 [2024-11-05 12:51:10.557162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.530 [2024-11-05 12:51:10.565134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.530 [2024-11-05 12:51:10.565182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.565200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.573070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.573101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.573134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.579611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.579644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.579661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.585327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.585360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.585378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.590377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.590422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.590438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.595521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.595566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.595583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.600329] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.600360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.600392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.605096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.605126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.605143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.609925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.609970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.609987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.614790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.614834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.614850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.619796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.619825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.619856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.624678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.624722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.624739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.629615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.629644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.629676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.634336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.634367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.634390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.639947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.639978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.640010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.647032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.647077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.647094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.654217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.654249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.654266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.660057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.660087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 
12:51:10.660103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.665851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.665887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.665920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.671876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.671908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.671947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.675997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.676028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.676063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.679508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.679540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.679557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.685246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.685284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.685302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.690370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.690419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.690437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.696213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.696245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.696264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.701849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.701888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.701906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.707895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.707941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.531 [2024-11-05 12:51:10.707958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.531 [2024-11-05 12:51:10.713965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.531 [2024-11-05 12:51:10.713996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.714029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.719506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.719537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.719570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.725413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.725443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.725460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.731162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.731208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.731225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.736989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.737019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.737053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.742589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.742620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.742652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.748355] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.748387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.748405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.754387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.754418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.754436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.760298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.760331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.760349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.532 [2024-11-05 12:51:10.766210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.532 [2024-11-05 12:51:10.766242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.532 [2024-11-05 12:51:10.766260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.772516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.772547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.772565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.778854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.778893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.778910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.783845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.783882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.783922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.788991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.789022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.789039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.792218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.792247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.792281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.796613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.796644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.796661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.801243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.801275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.801293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.805975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.806005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 
12:51:10.806036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.810576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.810623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.810640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.815104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.815133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.815166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.819641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.819671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.819688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.824276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.824306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.824323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.828792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.828821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.828838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.790 [2024-11-05 12:51:10.833485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.790 [2024-11-05 12:51:10.833531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.790 [2024-11-05 12:51:10.833548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.837958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.837987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.838004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.842659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.842688] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.842705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.847226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.847256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.847273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.851841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.851876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.851920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.856421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.856450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.856483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.861550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 
12:51:10.861581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.861604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.867030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.867078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.867096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.871951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.871981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.871998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.876867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.876897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.876914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.881630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.881660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.881677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.886154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.886194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.886211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.890853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.890888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.890921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.895624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.895653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.895670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.900339] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.900369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.900386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.904970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.905005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.905023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.909608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.909639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.909656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.914357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.914387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.914404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.919016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.919044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.919077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.924669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.924717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.924735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.929591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.929623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.929640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.934375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.934405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.934422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.939170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.939201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.939218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.944746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.944776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.944808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.949191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.791 [2024-11-05 12:51:10.949223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.791 [2024-11-05 12:51:10.949241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.791 [2024-11-05 12:51:10.956793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.956823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 
12:51:10.956854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:10.962466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.962495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:10.962526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:10.968252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.968282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:10.968313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:10.974035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.974065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:10.974097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:10.980196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.980239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:10.980256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:10.986092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.986122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:10.986138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:10.992072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.992102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:10.992133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:10.997989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:10.998018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:10.998056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:11.003774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:11.003818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:11.003836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:11.009607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:11.009637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:11.009668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:11.015105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:11.015135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:11.015167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:11.020360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:11.020390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:11.020408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:11.025094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:11.025123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:11.025140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:41.792 [2024-11-05 12:51:11.030158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:41.792 [2024-11-05 12:51:11.030189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:41.792 [2024-11-05 12:51:11.030206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.050 [2024-11-05 12:51:11.035279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.050 [2024-11-05 12:51:11.035310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.050 [2024-11-05 12:51:11.035327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.050 [2024-11-05 12:51:11.040094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.050 [2024-11-05 12:51:11.040139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.050 [2024-11-05 12:51:11.040158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.050 [2024-11-05 12:51:11.044577] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.050 [2024-11-05 12:51:11.044611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.050 [2024-11-05 12:51:11.044644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.050 [2024-11-05 12:51:11.049134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.050 [2024-11-05 12:51:11.049163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.050 [2024-11-05 12:51:11.049195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.050 [2024-11-05 12:51:11.053845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.050 [2024-11-05 12:51:11.053883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.050 [2024-11-05 12:51:11.053916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.050 [2024-11-05 12:51:11.058475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.050 [2024-11-05 12:51:11.058503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.050 [2024-11-05 12:51:11.058536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:36:42.050 [2024-11-05 12:51:11.063029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.050 [2024-11-05 12:51:11.063058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.063089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.068162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.068205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.068223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.072052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.072081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.072096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.076690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.076733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.076749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.081147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.081176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.081207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.085668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.085696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.085728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.090052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.090082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.090099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.094607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.094656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 
12:51:11.094673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.099207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.099251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.099268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.103820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.103849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.103888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.108413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.108456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.108472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.113087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.113130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.113146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.117651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.117678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.117694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.122248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.122296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.122312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.126880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.126907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.126939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.132064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.132093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.132126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:42.051 [2024-11-05 12:51:11.138721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x198c6a0) 00:36:42.051 [2024-11-05 12:51:11.138750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:42.051 [2024-11-05 12:51:11.138781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:42.051 5813.00 IOPS, 726.62 MiB/s 00:36:42.051 Latency(us) 00:36:42.051 [2024-11-05T11:51:11.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.051 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:42.051 nvme0n1 : 2.00 5811.74 726.47 0.00 0.00 2748.90 700.87 8835.22 00:36:42.051 [2024-11-05T11:51:11.289Z] =================================================================================================================== 00:36:42.051 [2024-11-05T11:51:11.289Z] Total : 5811.74 726.47 0.00 0.00 2748.90 700.87 8835.22 00:36:42.051 { 00:36:42.051 "results": [ 00:36:42.051 { 00:36:42.051 "job": "nvme0n1", 00:36:42.051 "core_mask": "0x2", 00:36:42.051 "workload": "randread", 00:36:42.051 "status": "finished", 00:36:42.051 "queue_depth": 16, 00:36:42.051 "io_size": 131072, 00:36:42.051 "runtime": 2.003187, 00:36:42.051 "iops": 5811.738993913199, 00:36:42.051 "mibps": 726.4673742391499, 00:36:42.051 "io_failed": 0, 00:36:42.051 "io_timeout": 0, 00:36:42.051 "avg_latency_us": 2748.9021242372764, 00:36:42.051 "min_latency_us": 700.8711111111111, 00:36:42.051 "max_latency_us": 8835.223703703703 
00:36:42.051 } 00:36:42.051 ], 00:36:42.051 "core_count": 1 00:36:42.051 } 00:36:42.051 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:42.051 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:42.051 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:42.051 | .driver_specific 00:36:42.051 | .nvme_error 00:36:42.051 | .status_code 00:36:42.051 | .command_transient_transport_error' 00:36:42.051 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 808426 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 808426 ']' 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 808426 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 808426 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:42.309 12:51:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 808426' 00:36:42.309 killing process with pid 808426 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 808426 00:36:42.309 Received shutdown signal, test time was about 2.000000 seconds 00:36:42.309 00:36:42.309 Latency(us) 00:36:42.309 [2024-11-05T11:51:11.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.309 [2024-11-05T11:51:11.547Z] =================================================================================================================== 00:36:42.309 [2024-11-05T11:51:11.547Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:42.309 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 808426 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=809122 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 809122 /var/tmp/bperf.sock 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@833 -- # '[' -z 809122 ']' 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:42.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:42.568 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:42.568 [2024-11-05 12:51:11.705822] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:42.568 [2024-11-05 12:51:11.705947] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809122 ] 00:36:42.568 [2024-11-05 12:51:11.773512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:42.826 [2024-11-05 12:51:11.821536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.826 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:42.826 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:42.826 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:42.826 12:51:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:43.084 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:43.084 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.084 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:43.084 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.084 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:43.084 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:43.648 nvme0n1 00:36:43.648 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:43.648 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.648 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:43.648 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.648 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:43.648 12:51:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:43.648 Running I/O for 2 seconds... 00:36:43.648 [2024-11-05 12:51:12.818702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166f6458 00:36:43.648 [2024-11-05 12:51:12.819534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.648 [2024-11-05 12:51:12.819575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:43.648 [2024-11-05 12:51:12.830650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166feb58 00:36:43.648 [2024-11-05 12:51:12.831442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.648 [2024-11-05 12:51:12.831486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:36:43.648 [2024-11-05 12:51:12.844533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166ee5c8 00:36:43.648 [2024-11-05 12:51:12.845636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.648 [2024-11-05 12:51:12.845682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:43.649 [2024-11-05 12:51:12.858932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166f1868 00:36:43.649 [2024-11-05 12:51:12.860683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5730 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:43.649 [2024-11-05 12:51:12.860726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:43.649 [2024-11-05 12:51:12.871482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166e99d8 00:36:43.649 [2024-11-05 12:51:12.873257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.649 [2024-11-05 12:51:12.873300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:43.649 [2024-11-05 12:51:12.880585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166eb328 00:36:43.649 [2024-11-05 12:51:12.881614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.649 [2024-11-05 12:51:12.881657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.893608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166f0ff8 00:36:43.907 [2024-11-05 12:51:12.894746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.894788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.905695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166ea680 00:36:43.907 [2024-11-05 12:51:12.906529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:48 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.906571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.916987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166eee38 00:36:43.907 [2024-11-05 12:51:12.917782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.917823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.931590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166e1710 00:36:43.907 [2024-11-05 12:51:12.932895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.932940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.944338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166f0ff8 00:36:43.907 [2024-11-05 12:51:12.945757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.945799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.953947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166e9168 00:36:43.907 [2024-11-05 12:51:12.954748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.954791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.966542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166fbcf0 00:36:43.907 [2024-11-05 12:51:12.967530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.967572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.979241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166f57b0 00:36:43.907 [2024-11-05 12:51:12.980379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.980423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:12.991416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166e4578 00:36:43.907 [2024-11-05 12:51:12.992577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:12.992620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.003696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166fa3a0 00:36:43.907 
[2024-11-05 12:51:13.004895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.004923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.016106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166ef6a8 00:36:43.907 [2024-11-05 12:51:13.017080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.017125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.027612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166e1710 00:36:43.907 [2024-11-05 12:51:13.029210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.029239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.038059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166f7da8 00:36:43.907 [2024-11-05 12:51:13.038810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.038851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.050777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15ed2c0) with pdu=0x2000166f35f0 00:36:43.907 [2024-11-05 12:51:13.051718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.051761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.065385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166e8088 00:36:43.907 [2024-11-05 12:51:13.066989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.067032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.078074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166f1868 00:36:43.907 [2024-11-05 12:51:13.079857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.079924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.089274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:43.907 [2024-11-05 12:51:13.089496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.089540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.103455] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:43.907 [2024-11-05 12:51:13.103742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.103785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.117574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:43.907 [2024-11-05 12:51:13.117868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.117912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.131683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:43.907 [2024-11-05 12:51:13.131920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.131948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:43.907 [2024-11-05 12:51:13.145947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:43.907 [2024-11-05 12:51:13.146162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-11-05 12:51:13.146189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:36:44.165 [2024-11-05 12:51:13.160379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.165 [2024-11-05 12:51:13.160617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.165 [2024-11-05 12:51:13.160659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.165 [2024-11-05 12:51:13.174349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.165 [2024-11-05 12:51:13.174586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.165 [2024-11-05 12:51:13.174629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.165 [2024-11-05 12:51:13.188589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.165 [2024-11-05 12:51:13.188826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.165 [2024-11-05 12:51:13.188876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.165 [2024-11-05 12:51:13.202644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.202928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.202975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.216967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.217287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.217315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.231005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.231226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.231268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.245325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.245563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.245591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.259582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.259818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.259870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.273752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.273960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.273987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.287736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.288032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.288063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.301700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.301920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.301946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.315756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.316048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 
[2024-11-05 12:51:13.316094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.329768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.330003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.330030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.343916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.344119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.344160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.357928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.358239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.358282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.372287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.372544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14039 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.372588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.386345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.386615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.386641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.166 [2024-11-05 12:51:13.400290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.166 [2024-11-05 12:51:13.400488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.166 [2024-11-05 12:51:13.400531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.414363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.414597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.414640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.428569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.428770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.428796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.442725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.442968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.443016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.456945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.457155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.457197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.470960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.471192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.471231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.485113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.485351] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.485394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.499151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.499432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.499476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.513330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.513595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.513639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.424 [2024-11-05 12:51:13.527399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.424 [2024-11-05 12:51:13.527688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.424 [2024-11-05 12:51:13.527732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.541593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with 
pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.541771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.541795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.555656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.555901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.555928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.569497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.569747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.569795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.583462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.583679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.583718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.597343] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.597583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.597626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.611570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.611774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.611814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.625440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.625660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.625703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 12:51:13.639648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.639875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.639915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.425 [2024-11-05 
12:51:13.653616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.425 [2024-11-05 12:51:13.653946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.425 [2024-11-05 12:51:13.653991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.667700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.667941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.667969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.681769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.682050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.682094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.696003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.696293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.696337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.710107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.710410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.710454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.724395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.724621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.724675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.738480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.738765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.738810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.752808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.753073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.753103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.767035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.767270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.767314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.683 [2024-11-05 12:51:13.781210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.683 [2024-11-05 12:51:13.781432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.683 [2024-11-05 12:51:13.781476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.795439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.795719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.795763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 18777.00 IOPS, 73.35 MiB/s [2024-11-05T11:51:13.922Z] [2024-11-05 12:51:13.809631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.809961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:44.684 [2024-11-05 12:51:13.810000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.823846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.824162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.824189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.838160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.838403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.838445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.852368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.852583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.852627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.866431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.866716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:16089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.866760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.880638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.880915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.880943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.894911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.895113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.895160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.909028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.909237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.909265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.684 [2024-11-05 12:51:13.922837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.684 [2024-11-05 12:51:13.923054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.684 [2024-11-05 12:51:13.923083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:13.936718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:13.936993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:13.937023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:13.950817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:13.951119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:13.951173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:13.965095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:13.965376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:13.965419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:13.979197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 
[2024-11-05 12:51:13.979462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:13.979507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:13.993324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:13.993553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:13.993579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.007303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.007542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.007585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.021574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.021828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.021880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.035757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.035977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.036025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.050122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.050329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.050354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.064132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.064411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.064456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.078450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.078722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.078768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.092518] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.092804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.092847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.106603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.942 [2024-11-05 12:51:14.106981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.942 [2024-11-05 12:51:14.107013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.942 [2024-11-05 12:51:14.120808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.943 [2024-11-05 12:51:14.121048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.943 [2024-11-05 12:51:14.121079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.943 [2024-11-05 12:51:14.135031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.943 [2024-11-05 12:51:14.135266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.943 [2024-11-05 12:51:14.135310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:36:44.943 [2024-11-05 12:51:14.149204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.943 [2024-11-05 12:51:14.149410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.943 [2024-11-05 12:51:14.149450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.943 [2024-11-05 12:51:14.163388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.943 [2024-11-05 12:51:14.163657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.943 [2024-11-05 12:51:14.163701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:44.943 [2024-11-05 12:51:14.177584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:44.943 [2024-11-05 12:51:14.177842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:44.943 [2024-11-05 12:51:14.177900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.191384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.191602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.191627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.205532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.205744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.205784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.219717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.219960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.219990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.233779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.233972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.234001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.247689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.247924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.247952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.261594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.261905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.261934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.275629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.275919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.275947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.289958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.290197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.290241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.304071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.304309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 
[2024-11-05 12:51:14.304360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.318412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.318717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.318759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.332512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.332751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.332779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.346650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.346908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.346936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.360495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.201 [2024-11-05 12:51:14.360715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10972 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.201 [2024-11-05 12:51:14.360759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.201 [2024-11-05 12:51:14.374828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.202 [2024-11-05 12:51:14.375056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.202 [2024-11-05 12:51:14.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.202 [2024-11-05 12:51:14.388950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.202 [2024-11-05 12:51:14.389175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.202 [2024-11-05 12:51:14.389219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.202 [2024-11-05 12:51:14.403170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.202 [2024-11-05 12:51:14.403428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.202 [2024-11-05 12:51:14.403471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.202 [2024-11-05 12:51:14.417268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.202 [2024-11-05 12:51:14.417501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:101 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.202 [2024-11-05 12:51:14.417532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.202 [2024-11-05 12:51:14.431457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.202 [2024-11-05 12:51:14.431754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.202 [2024-11-05 12:51:14.431796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.445375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.445617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.445661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.459599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.459810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.459851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.473628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.473913] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.473958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.487684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.487908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.487934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.501762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.502069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.502099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.515908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.516283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.516312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.530052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with 
pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.530263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.530306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.544209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.544483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.544519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.558152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.558435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.558480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.460 [2024-11-05 12:51:14.572322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.460 [2024-11-05 12:51:14.572632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.460 [2024-11-05 12:51:14.572664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.586425] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.586632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.586657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.600462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.600670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.600712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.614569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.614812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.614858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.628640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.628943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.628974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.642898] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.643188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.643234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.657132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.657342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.657385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.671367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.671603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.671654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.461 [2024-11-05 12:51:14.685530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.685756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.685796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:36:45.461 [2024-11-05 12:51:14.699568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.461 [2024-11-05 12:51:14.699814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.461 [2024-11-05 12:51:14.699844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 [2024-11-05 12:51:14.713600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.713834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.713884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 [2024-11-05 12:51:14.727825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.728047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.728090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 [2024-11-05 12:51:14.741976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.742203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.742245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 [2024-11-05 12:51:14.756005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.756261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.756291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 [2024-11-05 12:51:14.769906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.770108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.770135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 [2024-11-05 12:51:14.783991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.784249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.784293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 [2024-11-05 12:51:14.798144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.798367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.798406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 18441.00 IOPS, 72.04 MiB/s [2024-11-05T11:51:14.957Z] [2024-11-05 12:51:14.812103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed2c0) with pdu=0x2000166df988 00:36:45.719 [2024-11-05 12:51:14.812327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:45.719 [2024-11-05 12:51:14.812371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:45.719 00:36:45.719 Latency(us) 00:36:45.719 [2024-11-05T11:51:14.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.719 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:45.719 nvme0n1 : 2.01 18438.22 72.02 0.00 0.00 6925.94 2682.12 16699.54 00:36:45.719 [2024-11-05T11:51:14.957Z] =================================================================================================================== 00:36:45.719 [2024-11-05T11:51:14.957Z] Total : 18438.22 72.02 0.00 0.00 6925.94 2682.12 16699.54 00:36:45.719 { 00:36:45.719 "results": [ 00:36:45.719 { 00:36:45.719 "job": "nvme0n1", 00:36:45.719 "core_mask": "0x2", 00:36:45.719 "workload": "randwrite", 00:36:45.719 "status": "finished", 00:36:45.719 "queue_depth": 128, 00:36:45.719 "io_size": 4096, 00:36:45.719 "runtime": 2.00681, 00:36:45.719 "iops": 18438.217868158918, 00:36:45.719 "mibps": 72.02428854749577, 00:36:45.719 "io_failed": 0, 00:36:45.719 "io_timeout": 0, 00:36:45.719 "avg_latency_us": 6925.9414522938705, 00:36:45.719 "min_latency_us": 2682.1214814814816, 00:36:45.719 "max_latency_us": 16699.543703703705 00:36:45.719 } 00:36:45.719 ], 00:36:45.719 "core_count": 1 00:36:45.719 } 00:36:45.719 12:51:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:45.719 12:51:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:45.719 12:51:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:45.719 12:51:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:45.719 | .driver_specific 00:36:45.719 | .nvme_error 00:36:45.719 | .status_code 00:36:45.719 | .command_transient_transport_error' 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 809122 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 809122 ']' 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 809122 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 809122 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 809122' 00:36:45.977 killing process with pid 809122 00:36:45.977 12:51:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 809122 00:36:45.977 Received shutdown signal, test time was about 2.000000 seconds 00:36:45.977 00:36:45.977 Latency(us) 00:36:45.977 [2024-11-05T11:51:15.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:45.977 [2024-11-05T11:51:15.215Z] =================================================================================================================== 00:36:45.977 [2024-11-05T11:51:15.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:45.977 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 809122 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=809546 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 809546 /var/tmp/bperf.sock 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 809546 ']' 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:46.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:46.235 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.235 [2024-11-05 12:51:15.402844] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:46.235 [2024-11-05 12:51:15.402962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid809546 ] 00:36:46.235 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:46.236 Zero copy mechanism will not be used. 
00:36:46.236 [2024-11-05 12:51:15.472266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.493 [2024-11-05 12:51:15.517990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:46.493 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:46.493 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:36:46.493 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.493 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:46.751 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:46.751 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.751 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:46.751 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.751 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:46.751 12:51:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:47.317 nvme0n1 00:36:47.317 12:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:47.317 12:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.317 12:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:47.317 12:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.317 12:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:47.317 12:51:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:47.317 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:47.317 Zero copy mechanism will not be used. 00:36:47.317 Running I/O for 2 seconds... 00:36:47.317 [2024-11-05 12:51:16.387657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.388019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.388057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.394293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.394627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.394657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.317 
[2024-11-05 12:51:16.400766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.401074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.401107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.407178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.407532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.407561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.413663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.413977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.414008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.420206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.420548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.420591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.426547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.426822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.426872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.432885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.433170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.433201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.439155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.439467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.439498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.445567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.445875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.445907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.451886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.452158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.452211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.458278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.458572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.458600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.464641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.464996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.465026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.471214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.471516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:47.317 [2024-11-05 12:51:16.471545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.477467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.317 [2024-11-05 12:51:16.477574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.317 [2024-11-05 12:51:16.477601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.317 [2024-11-05 12:51:16.484424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.484837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.484887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.491622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.491809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.491852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.498227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.498293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.498320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.504074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.504143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.504170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.509834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.509935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.509961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.516276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.516356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.516384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.521948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.522022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.522050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.527786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.527884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.527923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.533452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.533519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.533545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.539350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.539444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.539470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.546528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 
00:36:47.318 [2024-11-05 12:51:16.546632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.546662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.318 [2024-11-05 12:51:16.553501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.318 [2024-11-05 12:51:16.553603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.318 [2024-11-05 12:51:16.553631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.559491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.559606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.559645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.564669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.564754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.564781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.569561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.569677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.569705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.574795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.574959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.574987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.580702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.580881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.580917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.586828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.587016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.587046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.592137] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.592253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.592279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.596835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.596973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.597000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.601725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.601820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.601871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.606625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.606730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.606760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:36:47.577 [2024-11-05 12:51:16.611325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.611420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.611447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.616269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.616373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.616403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.621132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.621245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.621273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.626079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.626154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.626182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.632191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.632261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.632288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.638632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.638854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.638892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.645234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.645311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.577 [2024-11-05 12:51:16.645353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.577 [2024-11-05 12:51:16.651981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.577 [2024-11-05 12:51:16.652166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.652209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.658597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.658750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.658780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.664347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.664468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.664496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.669363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.669437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.669462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.674392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.674488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:47.578 [2024-11-05 12:51:16.674513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.679585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.679650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.679677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.685417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.685501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.685527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.691502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.691588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.691614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.698323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.698483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.698511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.704297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.704464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.704492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.710417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.710496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.710537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.716384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.716510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.716538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.722582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.722738] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.722766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.728652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.728784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.728818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.735344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.735498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.735527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.741229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.741366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.741394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.746027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 
00:36:47.578 [2024-11-05 12:51:16.746146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.746189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.751313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.751395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.751421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.756415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.756544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.756573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.761777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.761953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.761983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.767931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.768115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.768144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.773833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.773983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.774011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.780789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.780999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.781029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.787467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.787547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.787588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.793266] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.793360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.793386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.798800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.798887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.798915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.803553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.803621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.578 [2024-11-05 12:51:16.803646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.578 [2024-11-05 12:51:16.808265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.578 [2024-11-05 12:51:16.808337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.579 [2024-11-05 12:51:16.808363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:47.579 [2024-11-05 12:51:16.813061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.579 [2024-11-05 12:51:16.813132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.579 [2024-11-05 12:51:16.813160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.837 [2024-11-05 12:51:16.817930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.837 [2024-11-05 12:51:16.818002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.837 [2024-11-05 12:51:16.818030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.837 [2024-11-05 12:51:16.822758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.837 [2024-11-05 12:51:16.822826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.837 [2024-11-05 12:51:16.822874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.837 [2024-11-05 12:51:16.827599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.837 [2024-11-05 12:51:16.827674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.837 [2024-11-05 12:51:16.827700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.837 [2024-11-05 12:51:16.832330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.837 [2024-11-05 12:51:16.832401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.837 [2024-11-05 12:51:16.832427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.837 [2024-11-05 12:51:16.837205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.837 [2024-11-05 12:51:16.837274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.837 [2024-11-05 12:51:16.837300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.837 [2024-11-05 12:51:16.841958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.842027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.842053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.846715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.846779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.846805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.851536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.851607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.851633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.856314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.856435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.856463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.861611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.861796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.861824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.867636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.867779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:47.838 [2024-11-05 12:51:16.867813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.874041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.874169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.874212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.880961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.881044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.881072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.887588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.887783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.887810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.894434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.894612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.894642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.901370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.901590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.901620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.908642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.908737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.908764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.915236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.915352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.915380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.922037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.922152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.922197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.928674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.928877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.928907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.936015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.936120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.936148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.943089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.943186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.943214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.949727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 
00:36:47.838 [2024-11-05 12:51:16.949836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.949888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.956537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.956732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.956761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.963113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.963208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.963235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.969937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.970088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.970117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.976833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.977039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.977068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.983515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.983636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.983664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.990415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.990596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.990626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:16.997197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:16.997301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:16.997328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:17.003940] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.838 [2024-11-05 12:51:17.004108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.838 [2024-11-05 12:51:17.004137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.838 [2024-11-05 12:51:17.010998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.011110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.011137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.018216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.018365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.018395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.025319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.025475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.025504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:36:47.839 [2024-11-05 12:51:17.032236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.032375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.032403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.039067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.039218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.039247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.046074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.046237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.046271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.053210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.053417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.053446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.060107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.060211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.060239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.066902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.067083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.067112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:47.839 [2024-11-05 12:51:17.073789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:47.839 [2024-11-05 12:51:17.073925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:47.839 [2024-11-05 12:51:17.073955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.080838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.080999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.097 [2024-11-05 12:51:17.081028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.087969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.088131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.097 [2024-11-05 12:51:17.088160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.095186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.095286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.097 [2024-11-05 12:51:17.095314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.102179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.102295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.097 [2024-11-05 12:51:17.102323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.108789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.108930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:48.097 [2024-11-05 12:51:17.108960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.115661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.115784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.097 [2024-11-05 12:51:17.115813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.122484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.122664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.097 [2024-11-05 12:51:17.122693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.129222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.129324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.097 [2024-11-05 12:51:17.129352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.097 [2024-11-05 12:51:17.136142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.097 [2024-11-05 12:51:17.136285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.097 [2024-11-05 12:51:17.136314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.097 [2024-11-05 12:51:17.143164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.097 [2024-11-05 12:51:17.143325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.097 [2024-11-05 12:51:17.143354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.097 [2024-11-05 12:51:17.150005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.097 [2024-11-05 12:51:17.150114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.097 [2024-11-05 12:51:17.150156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.097 [2024-11-05 12:51:17.157082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.097 [2024-11-05 12:51:17.157274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.097 [2024-11-05 12:51:17.157305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.097 [2024-11-05 12:51:17.164247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.097 [2024-11-05 12:51:17.164370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.097 [2024-11-05 12:51:17.164399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.170929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.171125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.171170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.177845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.178007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.178037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.184375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.184508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.184536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.191348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.191521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.191548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.198443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.198560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.198587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.205298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.205452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.205480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.212312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.212416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.212443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.218984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.219146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.219188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.225742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.225885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.225927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.232500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.232654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.232682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.239229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.239395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.239422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.246335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.246518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.246546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.253290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.253379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.253406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.260179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.260287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.260314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.267089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.267268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.267297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.273971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.274106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.274135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.281203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.281390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.281419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.288176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.288286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.288313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.295290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.295406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.295433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.302337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.302455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.302481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.309078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.309172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.309201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.315741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.315898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.315927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.322644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.322759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.322788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.329592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.329683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.329710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.098 [2024-11-05 12:51:17.336452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.098 [2024-11-05 12:51:17.336635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.098 [2024-11-05 12:51:17.336664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.357 [2024-11-05 12:51:17.343349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.357 [2024-11-05 12:51:17.343480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.357 [2024-11-05 12:51:17.343508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.357 [2024-11-05 12:51:17.350307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.350500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.350529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.357103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.357202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.357243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.364047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.364241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.364270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.371048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.371187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.371216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.358 4885.00 IOPS, 610.62 MiB/s [2024-11-05T11:51:17.596Z] [2024-11-05 12:51:17.379206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.379448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.379478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.385020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.385155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.385199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.389493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.389606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.389633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.393836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.393988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.394018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.398192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.398319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.398350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.402564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.402675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.402702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.406981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.407097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.407129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.411328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.411439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.411467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.415599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.415713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.415741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.419979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.420098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.420128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.424368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.424483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.424509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.428636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.428755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.428782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.433018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.433138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.433181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.437417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.437552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.437577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.441730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.441857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.441892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.446128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.446260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.446288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.450430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.450542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.450569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.455109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.455261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.455290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.460355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.460502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.460529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.465627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.465827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.465855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.471636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.471820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.471870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.476997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.477146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.477191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.358 [2024-11-05 12:51:17.482091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.358 [2024-11-05 12:51:17.482229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.358 [2024-11-05 12:51:17.482257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.487400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.487552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.487579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.492524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.492663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.492705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.497795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.497991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.498022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.503043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.503203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.503232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.508214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.508399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.508426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.513485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.513616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.513643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.518687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.518824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.518851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.523894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.524032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.524065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.528978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.529119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.529147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.534178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.534321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.534348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.539477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.539642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.539682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.544834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.544994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.545022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.549432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.549564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.549608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.553812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.553970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.553998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.559358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.559457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.559484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.564264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.564365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.564392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.568681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.568793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.568820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.573645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.573829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.573881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.578748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.578925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.578956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.584637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.584744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.584771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.589625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.589753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.589780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:48.359 [2024-11-05 12:51:17.594135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.359 [2024-11-05 12:51:17.594239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.359 [2024-11-05 12:51:17.594267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:48.618 [2024-11-05 12:51:17.598826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.618 [2024-11-05 12:51:17.599002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.618 [2024-11-05 12:51:17.599030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:36:48.618 [2024-11-05 12:51:17.603559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:48.618 [2024-11-05 12:51:17.603722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:48.618 [2024-11-05 12:51:17.603750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:48.618 [2024-11-05 12:51:17.608203]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.608366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.608393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.612770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.612932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.612960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.617396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.617527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.617554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.621970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.622145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.622173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:48.618 [2024-11-05 12:51:17.626504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.626650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.626676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.631049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.631202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.631229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.635406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.635517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.635544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.640282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.640463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.640492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.645501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.618 [2024-11-05 12:51:17.645676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.618 [2024-11-05 12:51:17.645703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.618 [2024-11-05 12:51:17.650715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.650896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.650932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.656587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.656716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.656744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.662192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.662443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.662473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.667319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.667521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.667551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.672481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.672609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.672638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.677661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.677809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.677838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.682771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.682909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:48.619 [2024-11-05 12:51:17.682936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.688096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.688221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.688249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.693271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.693440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.693468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.698603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.698764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.698792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.703847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.704097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.704127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.709066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.709192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.709219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.714353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.714509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.714537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.719705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.719872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.719902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.725163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.725324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.725353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.730382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.730530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.730559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.735577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.735732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.735761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.740740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.740942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.740972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.745982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 
00:36:48.619 [2024-11-05 12:51:17.746140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.746185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.751191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.751340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.751369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.756509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.756704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.756732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.761740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.761926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.761956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.766971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.767098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.767126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.772182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.772318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.772345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.777348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.777514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.777541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.782632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.782841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.782878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.787894] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.619 [2024-11-05 12:51:17.788052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.619 [2024-11-05 12:51:17.788086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.619 [2024-11-05 12:51:17.793271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.793391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.793419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.798432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.798595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.798622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.803645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.803799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.803826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:36:48.620 [2024-11-05 12:51:17.808909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.809163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.809194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.814330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.814479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.814507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.819399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.819563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.819590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.824614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.824758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.824785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.829966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.830108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.830136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.835206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.835425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.835469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.840649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.840915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.840946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.845699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.845875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.845904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.851049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.851195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.851224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:48.620 [2024-11-05 12:51:17.856344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.620 [2024-11-05 12:51:17.856515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.620 [2024-11-05 12:51:17.856543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:48.879 [2024-11-05 12:51:17.861502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.879 [2024-11-05 12:51:17.861714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:48.879 [2024-11-05 12:51:17.861753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:48.879 [2024-11-05 12:51:17.866641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:48.879 [2024-11-05 12:51:17.866810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:48.879 [2024-11-05 12:51:17.866838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 [00:36:48.879-00:36:49.141, 2024-11-05 12:51:17.871725-12:51:18.265102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90, repeated at ~5 ms intervals; each occurrence is followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0/15 nsid:1 len:32 (lba varies) SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 p:0 m:0 dnr:0, with sqhd cycling 0001/0021/0041/0061. 00:36:49.141 [2024-11-05 12:51:18.270067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.270220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.270248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.275154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.275398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.275428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.280195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.280375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.280411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.285280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.285453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.285481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.290386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 
00:36:49.141 [2024-11-05 12:51:18.290530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.290558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.295567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.295740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.295767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.300666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.300856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.300893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.305868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.306024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.306052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.310935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.311106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.311134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.316255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.316424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.316453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.321338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.321511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.321540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.326505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.326714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.326741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 
12:51:18.331639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.331806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.331833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.336944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.337126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.337154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.342016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.342201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.342228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.347136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.347318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.347346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.352329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.141 [2024-11-05 12:51:18.352496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.141 [2024-11-05 12:51:18.352523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:49.141 [2024-11-05 12:51:18.357398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.142 [2024-11-05 12:51:18.357568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.142 [2024-11-05 12:51:18.357596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:49.142 [2024-11-05 12:51:18.362759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.142 [2024-11-05 12:51:18.362916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.142 [2024-11-05 12:51:18.362945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:49.142 [2024-11-05 12:51:18.367834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90 00:36:49.142 [2024-11-05 12:51:18.367992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:49.142 [2024-11-05 12:51:18.368020] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:49.142 [2024-11-05 12:51:18.373057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:49.142 [2024-11-05 12:51:18.373196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.142 [2024-11-05 12:51:18.373222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:49.142 [2024-11-05 12:51:18.378399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15ed600) with pdu=0x2000166fef90
00:36:49.400 [2024-11-05 12:51:18.379775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:49.400 [2024-11-05 12:51:18.379807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:36:49.400 5486.00 IOPS, 685.75 MiB/s
00:36:49.400 Latency(us)
00:36:49.400 [2024-11-05T11:51:18.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:49.400 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:49.400 nvme0n1 : 2.00 5483.11 685.39 0.00 0.00 2910.70 2026.76 11747.93
00:36:49.400 [2024-11-05T11:51:18.638Z] ===================================================================================================================
00:36:49.400 [2024-11-05T11:51:18.638Z] Total : 5483.11 685.39 0.00 0.00 2910.70 2026.76 11747.93
00:36:49.400 {
00:36:49.400 "results": [
00:36:49.400 {
00:36:49.400 "job": "nvme0n1",
00:36:49.400 "core_mask": "0x2",
00:36:49.400 "workload": "randwrite",
00:36:49.400 "status": "finished",
00:36:49.400 "queue_depth": 16,
00:36:49.400 "io_size": 131072,
00:36:49.400 "runtime": 2.003791,
00:36:49.400 "iops": 5483.106771115351,
00:36:49.400 "mibps": 685.3883463894189,
00:36:49.400 "io_failed": 0,
00:36:49.400 "io_timeout": 0,
00:36:49.400 "avg_latency_us": 2910.6996252136364,
00:36:49.400 "min_latency_us": 2026.7614814814815,
00:36:49.400 "max_latency_us": 11747.934814814815
00:36:49.400 }
00:36:49.400 ],
00:36:49.400 "core_count": 1
00:36:49.400 }
00:36:49.400 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:49.400 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:49.400 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:49.400 | .driver_specific
00:36:49.400 | .nvme_error
00:36:49.400 | .status_code
00:36:49.400 | .command_transient_transport_error'
00:36:49.400 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 354 > 0 ))
00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 809546
00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 809546 ']'
00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 809546
00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 809546
00:36:49.658
12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 809546' 00:36:49.658 killing process with pid 809546 00:36:49.658 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 809546 00:36:49.658 Received shutdown signal, test time was about 2.000000 seconds 00:36:49.658 00:36:49.658 Latency(us) 00:36:49.658 [2024-11-05T11:51:18.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.658 [2024-11-05T11:51:18.896Z] =================================================================================================================== 00:36:49.658 [2024-11-05T11:51:18.896Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:49.659 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 809546 00:36:49.916 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 807676 00:36:49.916 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 807676 ']' 00:36:49.916 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 807676 00:36:49.916 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:36:49.917 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:49.917 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 807676 00:36:49.917 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:49.917 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:49.917 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 807676' 00:36:49.917 killing process with pid 807676 00:36:49.917 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 807676 00:36:49.917 12:51:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 807676 00:36:50.175 00:36:50.175 real 0m15.361s 00:36:50.175 user 0m30.795s 00:36:50.175 sys 0m4.258s 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:50.175 ************************************ 00:36:50.175 END TEST nvmf_digest_error 00:36:50.175 ************************************ 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.175 rmmod nvme_tcp 00:36:50.175 rmmod nvme_fabrics 00:36:50.175 rmmod nvme_keyring 00:36:50.175 12:51:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 807676 ']' 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 807676 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 807676 ']' 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 807676 00:36:50.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (807676) - No such process 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 807676 is not found' 00:36:50.175 Process with pid 807676 is not found 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.175 12:51:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:52.082 00:36:52.082 real 0m35.486s 00:36:52.082 user 1m2.566s 00:36:52.082 sys 0m10.168s 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:52.082 ************************************ 00:36:52.082 END TEST nvmf_digest 00:36:52.082 ************************************ 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:52.082 12:51:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.341 ************************************ 00:36:52.341 START TEST nvmf_bdevperf 00:36:52.341 ************************************ 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:52.341 * Looking for test storage... 
00:36:52.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:52.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.341 --rc genhtml_branch_coverage=1 00:36:52.341 --rc genhtml_function_coverage=1 00:36:52.341 --rc genhtml_legend=1 00:36:52.341 --rc geninfo_all_blocks=1 00:36:52.341 --rc geninfo_unexecuted_blocks=1 00:36:52.341 00:36:52.341 ' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:36:52.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.341 --rc genhtml_branch_coverage=1 00:36:52.341 --rc genhtml_function_coverage=1 00:36:52.341 --rc genhtml_legend=1 00:36:52.341 --rc geninfo_all_blocks=1 00:36:52.341 --rc geninfo_unexecuted_blocks=1 00:36:52.341 00:36:52.341 ' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:52.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.341 --rc genhtml_branch_coverage=1 00:36:52.341 --rc genhtml_function_coverage=1 00:36:52.341 --rc genhtml_legend=1 00:36:52.341 --rc geninfo_all_blocks=1 00:36:52.341 --rc geninfo_unexecuted_blocks=1 00:36:52.341 00:36:52.341 ' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:52.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.341 --rc genhtml_branch_coverage=1 00:36:52.341 --rc genhtml_function_coverage=1 00:36:52.341 --rc genhtml_legend=1 00:36:52.341 --rc geninfo_all_blocks=1 00:36:52.341 --rc geninfo_unexecuted_blocks=1 00:36:52.341 00:36:52.341 ' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.341 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:52.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:52.342 12:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.873 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:54.873 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:54.873 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:54.873 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:54.874 12:51:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:54.874 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.874 
12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:54.874 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:54.874 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:54.874 12:51:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:54.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:54.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:54.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:36:54.874 00:36:54.874 --- 10.0.0.2 ping statistics --- 00:36:54.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.874 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:54.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:54.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:36:54.874 00:36:54.874 --- 10.0.0.1 ping statistics --- 00:36:54.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.874 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:54.874 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=811910 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 811910 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 811910 ']' 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:54.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:54.875 12:51:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.875 [2024-11-05 12:51:23.836910] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:36:54.875 [2024-11-05 12:51:23.836987] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:54.875 [2024-11-05 12:51:23.912101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:54.875 [2024-11-05 12:51:23.957593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:54.875 [2024-11-05 12:51:23.957651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:54.875 [2024-11-05 12:51:23.957674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:54.875 [2024-11-05 12:51:23.957685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:54.875 [2024-11-05 12:51:23.957695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:54.875 [2024-11-05 12:51:23.959175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:54.875 [2024-11-05 12:51:23.959239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:54.875 [2024-11-05 12:51:23.959243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:54.875 [2024-11-05 12:51:24.105117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.875 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:55.133 Malloc0 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:55.133 [2024-11-05 12:51:24.161352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:55.133 
12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:55.133 { 00:36:55.133 "params": { 00:36:55.133 "name": "Nvme$subsystem", 00:36:55.133 "trtype": "$TEST_TRANSPORT", 00:36:55.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:55.133 "adrfam": "ipv4", 00:36:55.133 "trsvcid": "$NVMF_PORT", 00:36:55.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:55.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:55.133 "hdgst": ${hdgst:-false}, 00:36:55.133 "ddgst": ${ddgst:-false} 00:36:55.133 }, 00:36:55.133 "method": "bdev_nvme_attach_controller" 00:36:55.133 } 00:36:55.133 EOF 00:36:55.133 )") 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:55.133 12:51:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:55.133 "params": { 00:36:55.133 "name": "Nvme1", 00:36:55.133 "trtype": "tcp", 00:36:55.133 "traddr": "10.0.0.2", 00:36:55.133 "adrfam": "ipv4", 00:36:55.133 "trsvcid": "4420", 00:36:55.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:55.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:55.133 "hdgst": false, 00:36:55.133 "ddgst": false 00:36:55.133 }, 00:36:55.133 "method": "bdev_nvme_attach_controller" 00:36:55.133 }' 00:36:55.133 [2024-11-05 12:51:24.216658] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:36:55.133 [2024-11-05 12:51:24.216735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid811970 ] 00:36:55.133 [2024-11-05 12:51:24.287742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.133 [2024-11-05 12:51:24.336760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.699 Running I/O for 1 seconds... 00:36:56.632 8418.00 IOPS, 32.88 MiB/s 00:36:56.632 Latency(us) 00:36:56.632 [2024-11-05T11:51:25.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.632 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:56.632 Verification LBA range: start 0x0 length 0x4000 00:36:56.632 Nvme1n1 : 1.02 8487.19 33.15 0.00 0.00 15014.73 3203.98 19612.25 00:36:56.632 [2024-11-05T11:51:25.870Z] =================================================================================================================== 00:36:56.632 [2024-11-05T11:51:25.870Z] Total : 8487.19 33.15 0.00 0.00 15014.73 3203.98 19612.25 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=812195 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:56.890 { 00:36:56.890 "params": { 00:36:56.890 "name": "Nvme$subsystem", 00:36:56.890 "trtype": "$TEST_TRANSPORT", 00:36:56.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:56.890 "adrfam": "ipv4", 00:36:56.890 "trsvcid": "$NVMF_PORT", 00:36:56.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:56.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:56.890 "hdgst": ${hdgst:-false}, 00:36:56.890 "ddgst": ${ddgst:-false} 00:36:56.890 }, 00:36:56.890 "method": "bdev_nvme_attach_controller" 00:36:56.890 } 00:36:56.890 EOF 00:36:56.890 )") 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:56.890 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:56.891 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:56.891 12:51:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:56.891 "params": { 00:36:56.891 "name": "Nvme1", 00:36:56.891 "trtype": "tcp", 00:36:56.891 "traddr": "10.0.0.2", 00:36:56.891 "adrfam": "ipv4", 00:36:56.891 "trsvcid": "4420", 00:36:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:56.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:56.891 "hdgst": false, 00:36:56.891 "ddgst": false 00:36:56.891 }, 00:36:56.891 "method": "bdev_nvme_attach_controller" 00:36:56.891 }' 00:36:56.891 [2024-11-05 12:51:25.931360] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:36:56.891 [2024-11-05 12:51:25.931450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812195 ]
00:36:56.891 [2024-11-05 12:51:26.000199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:56.891 [2024-11-05 12:51:26.045075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:57.148 Running I/O for 15 seconds...
00:36:59.453 8583.00 IOPS, 33.53 MiB/s [2024-11-05T11:51:28.952Z] 8564.00 IOPS, 33.45 MiB/s [2024-11-05T11:51:28.952Z]
12:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 811910
00:36:59.714 12:51:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:36:59.714 [2024-11-05 12:51:28.899384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:59.714 [2024-11-05 12:51:28.899438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair command/completion pairs elided: in-flight READ commands (lba:48400 through lba:49240, len:8) and WRITE commands (lba:49336 through lba:49408, len:8) on qid:1 all completed with "ABORTED - SQ DELETION (00/08)" after the nvmf target process was killed; the run of identical pairs continues past the end of this excerpt ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:59.715 [2024-11-05 12:51:28.903327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1705c20 is same with the state(6) to be set 00:36:59.715 [2024-11-05 12:51:28.903354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:59.715 [2024-11-05 12:51:28.903364] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:59.715 [2024-11-05 12:51:28.903373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49328 len:8 PRP1 0x0 PRP2 0x0 00:36:59.715 [2024-11-05 12:51:28.903385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.715 [2024-11-05 12:51:28.903537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.715 [2024-11-05 12:51:28.903583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.715 [2024-11-05 12:51:28.903610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:59.715 [2024-11-05 12:51:28.903636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:59.715 [2024-11-05 12:51:28.903648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.715 [2024-11-05 
12:51:28.906731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.715 [2024-11-05 12:51:28.906767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.715 [2024-11-05 12:51:28.907437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.715 [2024-11-05 12:51:28.907466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.715 [2024-11-05 12:51:28.907482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.715 [2024-11-05 12:51:28.907719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.715 [2024-11-05 12:51:28.907961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.715 [2024-11-05 12:51:28.907982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.715 [2024-11-05 12:51:28.907999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.715 [2024-11-05 12:51:28.908016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.715 [2024-11-05 12:51:28.920312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.715 [2024-11-05 12:51:28.920726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.715 [2024-11-05 12:51:28.920755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.715 [2024-11-05 12:51:28.920771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.715 [2024-11-05 12:51:28.921031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.715 [2024-11-05 12:51:28.921266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.715 [2024-11-05 12:51:28.921286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.715 [2024-11-05 12:51:28.921298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.715 [2024-11-05 12:51:28.921310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.715 [2024-11-05 12:51:28.933381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.715 [2024-11-05 12:51:28.933793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.715 [2024-11-05 12:51:28.933822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.715 [2024-11-05 12:51:28.933839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.715 [2024-11-05 12:51:28.934089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.715 [2024-11-05 12:51:28.934314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.715 [2024-11-05 12:51:28.934334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.715 [2024-11-05 12:51:28.934347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.716 [2024-11-05 12:51:28.934359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.716 [2024-11-05 12:51:28.946463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.716 [2024-11-05 12:51:28.946849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.716 [2024-11-05 12:51:28.946901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.716 [2024-11-05 12:51:28.946919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.716 [2024-11-05 12:51:28.947157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.716 [2024-11-05 12:51:28.947385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.716 [2024-11-05 12:51:28.947407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.716 [2024-11-05 12:51:28.947421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.716 [2024-11-05 12:51:28.947434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:28.960007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:28.960437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:28.960466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:28.960487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:28.960721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:28.960973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.975 [2024-11-05 12:51:28.960996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.975 [2024-11-05 12:51:28.961009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.975 [2024-11-05 12:51:28.961022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:28.973072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:28.973420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:28.973450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:28.973467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:28.973708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:28.973957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.975 [2024-11-05 12:51:28.973979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.975 [2024-11-05 12:51:28.973993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.975 [2024-11-05 12:51:28.974005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:28.986047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:28.986396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:28.986425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:28.986442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:28.986679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:28.986908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.975 [2024-11-05 12:51:28.986929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.975 [2024-11-05 12:51:28.986942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.975 [2024-11-05 12:51:28.986954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:28.999005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:28.999363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:28.999391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:28.999407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:28.999642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:28.999849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.975 [2024-11-05 12:51:28.999896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.975 [2024-11-05 12:51:28.999911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.975 [2024-11-05 12:51:28.999923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:29.012001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:29.012409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:29.012437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:29.012453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:29.012688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:29.012921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.975 [2024-11-05 12:51:29.012957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.975 [2024-11-05 12:51:29.012971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.975 [2024-11-05 12:51:29.012984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:29.025205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:29.025563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:29.025591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:29.025607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:29.025822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:29.026054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.975 [2024-11-05 12:51:29.026075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.975 [2024-11-05 12:51:29.026088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.975 [2024-11-05 12:51:29.026100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:29.038411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:29.038769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:29.038796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:29.038812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:29.039058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:29.039288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.975 [2024-11-05 12:51:29.039308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.975 [2024-11-05 12:51:29.039326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.975 [2024-11-05 12:51:29.039339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.975 [2024-11-05 12:51:29.051654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.975 [2024-11-05 12:51:29.052087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.975 [2024-11-05 12:51:29.052117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.975 [2024-11-05 12:51:29.052133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.975 [2024-11-05 12:51:29.052375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.975 [2024-11-05 12:51:29.052581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.976 [2024-11-05 12:51:29.052600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.976 [2024-11-05 12:51:29.052613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.976 [2024-11-05 12:51:29.052624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.976 [2024-11-05 12:51:29.064752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.976 [2024-11-05 12:51:29.065106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.976 [2024-11-05 12:51:29.065134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.976 [2024-11-05 12:51:29.065151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.976 [2024-11-05 12:51:29.065385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.976 [2024-11-05 12:51:29.065605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.976 [2024-11-05 12:51:29.065625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.976 [2024-11-05 12:51:29.065638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.976 [2024-11-05 12:51:29.065650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.976 [2024-11-05 12:51:29.077729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:59.976 [2024-11-05 12:51:29.078083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.976 [2024-11-05 12:51:29.078112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:36:59.976 [2024-11-05 12:51:29.078128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:36:59.976 [2024-11-05 12:51:29.078364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:36:59.976 [2024-11-05 12:51:29.078567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:59.976 [2024-11-05 12:51:29.078587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:59.976 [2024-11-05 12:51:29.078599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:59.976 [2024-11-05 12:51:29.078612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:59.976 [2024-11-05 12:51:29.090739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.976 [2024-11-05 12:51:29.091152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.976 [2024-11-05 12:51:29.091181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.976 [2024-11-05 12:51:29.091196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.976 [2024-11-05 12:51:29.091433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.976 [2024-11-05 12:51:29.091636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.976 [2024-11-05 12:51:29.091656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.976 [2024-11-05 12:51:29.091668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.976 [2024-11-05 12:51:29.091680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.976 [2024-11-05 12:51:29.103771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.976 [2024-11-05 12:51:29.104133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.976 [2024-11-05 12:51:29.104162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.976 [2024-11-05 12:51:29.104178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.976 [2024-11-05 12:51:29.104416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.976 [2024-11-05 12:51:29.104619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.976 [2024-11-05 12:51:29.104639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.976 [2024-11-05 12:51:29.104651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.976 [2024-11-05 12:51:29.104663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.976 [2024-11-05 12:51:29.116948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.976 [2024-11-05 12:51:29.117318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.976 [2024-11-05 12:51:29.117347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.976 [2024-11-05 12:51:29.117363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.976 [2024-11-05 12:51:29.117580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.976 [2024-11-05 12:51:29.117782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.976 [2024-11-05 12:51:29.117803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.976 [2024-11-05 12:51:29.117816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.976 [2024-11-05 12:51:29.117828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.976 [2024-11-05 12:51:29.130114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.976 [2024-11-05 12:51:29.130550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.976 [2024-11-05 12:51:29.130578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.976 [2024-11-05 12:51:29.130599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.976 [2024-11-05 12:51:29.130835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.976 [2024-11-05 12:51:29.131037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.976 [2024-11-05 12:51:29.131058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.976 [2024-11-05 12:51:29.131071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.976 [2024-11-05 12:51:29.131083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.976 [2024-11-05 12:51:29.143329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.976 [2024-11-05 12:51:29.143740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.976 [2024-11-05 12:51:29.143769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.976 [2024-11-05 12:51:29.143786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.976 [2024-11-05 12:51:29.144052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.976 [2024-11-05 12:51:29.144261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.976 [2024-11-05 12:51:29.144280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.976 [2024-11-05 12:51:29.144293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.976 [2024-11-05 12:51:29.144305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.976 [2024-11-05 12:51:29.156466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.976 [2024-11-05 12:51:29.156934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.976 [2024-11-05 12:51:29.156965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.976 [2024-11-05 12:51:29.156982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.976 [2024-11-05 12:51:29.157235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.976 [2024-11-05 12:51:29.157428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.976 [2024-11-05 12:51:29.157447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.976 [2024-11-05 12:51:29.157460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.976 [2024-11-05 12:51:29.157472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.976 [2024-11-05 12:51:29.169905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.976 [2024-11-05 12:51:29.170338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.976 [2024-11-05 12:51:29.170370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.976 [2024-11-05 12:51:29.170387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.976 [2024-11-05 12:51:29.170644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.976 [2024-11-05 12:51:29.170877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.976 [2024-11-05 12:51:29.170897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.976 [2024-11-05 12:51:29.170910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.976 [2024-11-05 12:51:29.170923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.977 [2024-11-05 12:51:29.183255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.977 [2024-11-05 12:51:29.183709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.977 [2024-11-05 12:51:29.183756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.977 [2024-11-05 12:51:29.183773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.977 [2024-11-05 12:51:29.184012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.977 [2024-11-05 12:51:29.184260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.977 [2024-11-05 12:51:29.184281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.977 [2024-11-05 12:51:29.184294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.977 [2024-11-05 12:51:29.184306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.977 [2024-11-05 12:51:29.196471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.977 [2024-11-05 12:51:29.196891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.977 [2024-11-05 12:51:29.196948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.977 [2024-11-05 12:51:29.196965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.977 [2024-11-05 12:51:29.197222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.977 [2024-11-05 12:51:29.197427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.977 [2024-11-05 12:51:29.197446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.977 [2024-11-05 12:51:29.197459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.977 [2024-11-05 12:51:29.197470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:59.977 [2024-11-05 12:51:29.209696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:59.977 [2024-11-05 12:51:29.210096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:59.977 [2024-11-05 12:51:29.210125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:36:59.977 [2024-11-05 12:51:29.210141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:36:59.977 [2024-11-05 12:51:29.210371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:36:59.977 [2024-11-05 12:51:29.210623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:59.977 [2024-11-05 12:51:29.210644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:59.977 [2024-11-05 12:51:29.210663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:59.977 [2024-11-05 12:51:29.210677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 [2024-11-05 12:51:29.223067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.236 [2024-11-05 12:51:29.223447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.236 [2024-11-05 12:51:29.223512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.236 [2024-11-05 12:51:29.223529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.236 [2024-11-05 12:51:29.223762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.236 [2024-11-05 12:51:29.224001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.236 [2024-11-05 12:51:29.224023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.236 [2024-11-05 12:51:29.224037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.236 [2024-11-05 12:51:29.224049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 [2024-11-05 12:51:29.236289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.236 [2024-11-05 12:51:29.236645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.236 [2024-11-05 12:51:29.236674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.236 [2024-11-05 12:51:29.236691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.236 [2024-11-05 12:51:29.236928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.236 [2024-11-05 12:51:29.237154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.236 [2024-11-05 12:51:29.237176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.236 [2024-11-05 12:51:29.237190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.236 [2024-11-05 12:51:29.237203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 [2024-11-05 12:51:29.249496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.236 [2024-11-05 12:51:29.249850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.236 [2024-11-05 12:51:29.249908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.236 [2024-11-05 12:51:29.249925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.236 [2024-11-05 12:51:29.250181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.236 [2024-11-05 12:51:29.250373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.236 [2024-11-05 12:51:29.250393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.236 [2024-11-05 12:51:29.250406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.236 [2024-11-05 12:51:29.250419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 [2024-11-05 12:51:29.262618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.236 [2024-11-05 12:51:29.262991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.236 [2024-11-05 12:51:29.263021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.236 [2024-11-05 12:51:29.263038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.236 [2024-11-05 12:51:29.263292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.236 [2024-11-05 12:51:29.263480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.236 [2024-11-05 12:51:29.263500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.236 [2024-11-05 12:51:29.263512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.236 [2024-11-05 12:51:29.263524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 [2024-11-05 12:51:29.275615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.236 [2024-11-05 12:51:29.276020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.236 [2024-11-05 12:51:29.276050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.236 [2024-11-05 12:51:29.276068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.236 [2024-11-05 12:51:29.276308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.236 [2024-11-05 12:51:29.276511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.236 [2024-11-05 12:51:29.276531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.236 [2024-11-05 12:51:29.276544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.236 [2024-11-05 12:51:29.276556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 7477.67 IOPS, 29.21 MiB/s [2024-11-05T11:51:29.474Z] [2024-11-05 12:51:29.288750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.236 [2024-11-05 12:51:29.289122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.236 [2024-11-05 12:51:29.289161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.236 [2024-11-05 12:51:29.289178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.236 [2024-11-05 12:51:29.289421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.236 [2024-11-05 12:51:29.289625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.236 [2024-11-05 12:51:29.289645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.236 [2024-11-05 12:51:29.289658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.236 [2024-11-05 12:51:29.289670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 [2024-11-05 12:51:29.301892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.236 [2024-11-05 12:51:29.302256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.236 [2024-11-05 12:51:29.302285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.236 [2024-11-05 12:51:29.302307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.236 [2024-11-05 12:51:29.302543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.236 [2024-11-05 12:51:29.302731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.236 [2024-11-05 12:51:29.302751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.236 [2024-11-05 12:51:29.302764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.236 [2024-11-05 12:51:29.302776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.236 [2024-11-05 12:51:29.314957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.315360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.315389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.315405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.315634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.315838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.315867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.315898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.315912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.328030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.328371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.328399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.328416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.328644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.328833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.328853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.328893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.328907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.341184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.341531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.341561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.341577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.341813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.342058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.342080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.342094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.342106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.354181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.354520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.354548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.354564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.354779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.355012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.355032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.355044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.355056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.367363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.367705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.367733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.367749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.368002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.368229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.368250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.368263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.368275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.380399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.380805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.380833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.380849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.381094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.381299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.381319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.381336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.381349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.393491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.393867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.393895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.393912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.394127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.394330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.394350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.394362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.394374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.406484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.406875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.406905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.406936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.407190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.407393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.407414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.407426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.407438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.419835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.420243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.420272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.420288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.420523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.420710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.420729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.420741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.420752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.237 [2024-11-05 12:51:29.433063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.237 [2024-11-05 12:51:29.433423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.237 [2024-11-05 12:51:29.433452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.237 [2024-11-05 12:51:29.433469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.237 [2024-11-05 12:51:29.433703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.237 [2024-11-05 12:51:29.433950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.237 [2024-11-05 12:51:29.433972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.237 [2024-11-05 12:51:29.433986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.237 [2024-11-05 12:51:29.433999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.238 [2024-11-05 12:51:29.446183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:00.238 [2024-11-05 12:51:29.446568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.238 [2024-11-05 12:51:29.446597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:00.238 [2024-11-05 12:51:29.446614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:00.238 [2024-11-05 12:51:29.446836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:00.238 [2024-11-05 12:51:29.447077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:00.238 [2024-11-05 12:51:29.447099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:00.238 [2024-11-05 12:51:29.447112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:00.238 [2024-11-05 12:51:29.447125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:00.238 [2024-11-05 12:51:29.459257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.238 [2024-11-05 12:51:29.459665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.238 [2024-11-05 12:51:29.459694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.238 [2024-11-05 12:51:29.459711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.238 [2024-11-05 12:51:29.459961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.238 [2024-11-05 12:51:29.460176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.238 [2024-11-05 12:51:29.460196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.238 [2024-11-05 12:51:29.460209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.238 [2024-11-05 12:51:29.460235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.238 [2024-11-05 12:51:29.472528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.238 [2024-11-05 12:51:29.472877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.238 [2024-11-05 12:51:29.472906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.238 [2024-11-05 12:51:29.472942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.238 [2024-11-05 12:51:29.473178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.238 [2024-11-05 12:51:29.473381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.238 [2024-11-05 12:51:29.473401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.238 [2024-11-05 12:51:29.473429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.238 [2024-11-05 12:51:29.473441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.485756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.486198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.486244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.486260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.486497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.486699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.486719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.486732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.486744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.498769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.499119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.499148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.499163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.499401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.499603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.499623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.499636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.499648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.511912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.512325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.512353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.512369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.512603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.512811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.512832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.512844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.512857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.525110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.525470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.525498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.525515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.525750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.525980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.526001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.526014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.526026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.538328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.538708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.538737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.538753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.539023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.539236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.539256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.539269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.539280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.551889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.552336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.552365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.552382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.552625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.552829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.552850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.552905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.552921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.565135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.565443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.565485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.565501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.565701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.565968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.565989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.566003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.566016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.578412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.578765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.578794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.578811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.579053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.579306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.579327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.579340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.579352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.591671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.592111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.592140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.592157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.592399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.592607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.592627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.592640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.592652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.605072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.605446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.605475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.605492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.605720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.605984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.606008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.606021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.606034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.618299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.618723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.618751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.618768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.619020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.619259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.619279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.496 [2024-11-05 12:51:29.619293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.496 [2024-11-05 12:51:29.619305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.496 [2024-11-05 12:51:29.631529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.496 [2024-11-05 12:51:29.631881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.496 [2024-11-05 12:51:29.631918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.496 [2024-11-05 12:51:29.631936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.496 [2024-11-05 12:51:29.632176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.496 [2024-11-05 12:51:29.632370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.496 [2024-11-05 12:51:29.632389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.632402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.632415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.497 [2024-11-05 12:51:29.644778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.497 [2024-11-05 12:51:29.645168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.497 [2024-11-05 12:51:29.645213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.497 [2024-11-05 12:51:29.645235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.497 [2024-11-05 12:51:29.645471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.497 [2024-11-05 12:51:29.645680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.497 [2024-11-05 12:51:29.645700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.645713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.645725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.497 [2024-11-05 12:51:29.658143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.497 [2024-11-05 12:51:29.658542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.497 [2024-11-05 12:51:29.658571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.497 [2024-11-05 12:51:29.658589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.497 [2024-11-05 12:51:29.658812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.497 [2024-11-05 12:51:29.659054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.497 [2024-11-05 12:51:29.659076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.659089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.659103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.497 [2024-11-05 12:51:29.671595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.497 [2024-11-05 12:51:29.671979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.497 [2024-11-05 12:51:29.672008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.497 [2024-11-05 12:51:29.672025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.497 [2024-11-05 12:51:29.672254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.497 [2024-11-05 12:51:29.672474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.497 [2024-11-05 12:51:29.672494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.672507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.672520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.497 [2024-11-05 12:51:29.684840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.497 [2024-11-05 12:51:29.685235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.497 [2024-11-05 12:51:29.685264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.497 [2024-11-05 12:51:29.685281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.497 [2024-11-05 12:51:29.685503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.497 [2024-11-05 12:51:29.685718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.497 [2024-11-05 12:51:29.685738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.685751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.685763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.497 [2024-11-05 12:51:29.698131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.497 [2024-11-05 12:51:29.698560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.497 [2024-11-05 12:51:29.698589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.497 [2024-11-05 12:51:29.698606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.497 [2024-11-05 12:51:29.698849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.497 [2024-11-05 12:51:29.699086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.497 [2024-11-05 12:51:29.699106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.699120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.699134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.497 [2024-11-05 12:51:29.711264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.497 [2024-11-05 12:51:29.711679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.497 [2024-11-05 12:51:29.711708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.497 [2024-11-05 12:51:29.711725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.497 [2024-11-05 12:51:29.711965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.497 [2024-11-05 12:51:29.712191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.497 [2024-11-05 12:51:29.712226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.712239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.712251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.497 [2024-11-05 12:51:29.724446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.497 [2024-11-05 12:51:29.724898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.497 [2024-11-05 12:51:29.724929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.497 [2024-11-05 12:51:29.724946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.497 [2024-11-05 12:51:29.725184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.497 [2024-11-05 12:51:29.725392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.497 [2024-11-05 12:51:29.725412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.497 [2024-11-05 12:51:29.725430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.497 [2024-11-05 12:51:29.725443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.756 [2024-11-05 12:51:29.737985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.756 [2024-11-05 12:51:29.738358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.756 [2024-11-05 12:51:29.738387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.756 [2024-11-05 12:51:29.738403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.756 [2024-11-05 12:51:29.738640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.756 [2024-11-05 12:51:29.738873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.756 [2024-11-05 12:51:29.738897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.756 [2024-11-05 12:51:29.738927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.756 [2024-11-05 12:51:29.738941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.756 [2024-11-05 12:51:29.751178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.756 [2024-11-05 12:51:29.751489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.756 [2024-11-05 12:51:29.751517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.756 [2024-11-05 12:51:29.751533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.756 [2024-11-05 12:51:29.751750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.756 [2024-11-05 12:51:29.752005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.756 [2024-11-05 12:51:29.752027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.756 [2024-11-05 12:51:29.752041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.756 [2024-11-05 12:51:29.752053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.756 [2024-11-05 12:51:29.764407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.756 [2024-11-05 12:51:29.764758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.756 [2024-11-05 12:51:29.764787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.756 [2024-11-05 12:51:29.764804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.756 [2024-11-05 12:51:29.765056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.756 [2024-11-05 12:51:29.765281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.756 [2024-11-05 12:51:29.765302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.756 [2024-11-05 12:51:29.765315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.756 [2024-11-05 12:51:29.765327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.756 [2024-11-05 12:51:29.777609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.756 [2024-11-05 12:51:29.778026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.756 [2024-11-05 12:51:29.778056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.756 [2024-11-05 12:51:29.778073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.756 [2024-11-05 12:51:29.778316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.756 [2024-11-05 12:51:29.778525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.756 [2024-11-05 12:51:29.778544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.756 [2024-11-05 12:51:29.778556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.756 [2024-11-05 12:51:29.778568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.756 [2024-11-05 12:51:29.790874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.756 [2024-11-05 12:51:29.791198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.756 [2024-11-05 12:51:29.791227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.756 [2024-11-05 12:51:29.791243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.756 [2024-11-05 12:51:29.791463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.756 [2024-11-05 12:51:29.791672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.756 [2024-11-05 12:51:29.791702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.756 [2024-11-05 12:51:29.791716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.756 [2024-11-05 12:51:29.791728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.756 [2024-11-05 12:51:29.804029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.756 [2024-11-05 12:51:29.804464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.756 [2024-11-05 12:51:29.804493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.756 [2024-11-05 12:51:29.804510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.756 [2024-11-05 12:51:29.804751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.756 [2024-11-05 12:51:29.805004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.756 [2024-11-05 12:51:29.805026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.756 [2024-11-05 12:51:29.805040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.756 [2024-11-05 12:51:29.805053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.756 [2024-11-05 12:51:29.817363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.756 [2024-11-05 12:51:29.817714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.756 [2024-11-05 12:51:29.817743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.817765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.818006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.818255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.818275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.818288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.818300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.830593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.830982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.831011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.831028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.831256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.831464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.831483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.831495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.831507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.843919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.844310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.844338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.844359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.844578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.844786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.844805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.844818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.844830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.857152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.857562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.857590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.857607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.857842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.858117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.858140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.858172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.858186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.870348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.870719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.870746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.870762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.870986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.871199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.871219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.871232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.871244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.883612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.883991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.884020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.884036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.884258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.884467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.884486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.884499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.884511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.896974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.897316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.897345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.897361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.897585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.897794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.897814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.897831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.897866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.910289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.910736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.910765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.910782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.911021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.911262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.911284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.911297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.911310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.923621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.923997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.924027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.924044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.924291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.924484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.924504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.924516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.924528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.936946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.937334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.937362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.757 [2024-11-05 12:51:29.937377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.757 [2024-11-05 12:51:29.937610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.757 [2024-11-05 12:51:29.937819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.757 [2024-11-05 12:51:29.937853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.757 [2024-11-05 12:51:29.937876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.757 [2024-11-05 12:51:29.937890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.757 [2024-11-05 12:51:29.950258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.757 [2024-11-05 12:51:29.950674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.757 [2024-11-05 12:51:29.950703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.758 [2024-11-05 12:51:29.950719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.758 [2024-11-05 12:51:29.950981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.758 [2024-11-05 12:51:29.951195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.758 [2024-11-05 12:51:29.951215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.758 [2024-11-05 12:51:29.951228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.758 [2024-11-05 12:51:29.951240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.758 [2024-11-05 12:51:29.963432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.758 [2024-11-05 12:51:29.963783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.758 [2024-11-05 12:51:29.963811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.758 [2024-11-05 12:51:29.963827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.758 [2024-11-05 12:51:29.964078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.758 [2024-11-05 12:51:29.964309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.758 [2024-11-05 12:51:29.964329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.758 [2024-11-05 12:51:29.964341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.758 [2024-11-05 12:51:29.964353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.758 [2024-11-05 12:51:29.976610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.758 [2024-11-05 12:51:29.976990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.758 [2024-11-05 12:51:29.977020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.758 [2024-11-05 12:51:29.977037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.758 [2024-11-05 12:51:29.977263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.758 [2024-11-05 12:51:29.977473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.758 [2024-11-05 12:51:29.977492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.758 [2024-11-05 12:51:29.977505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.758 [2024-11-05 12:51:29.977517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:00.758 [2024-11-05 12:51:29.989942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:00.758 [2024-11-05 12:51:29.990308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.758 [2024-11-05 12:51:29.990337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:00.758 [2024-11-05 12:51:29.990362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:00.758 [2024-11-05 12:51:29.990585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:00.758 [2024-11-05 12:51:29.990792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:00.758 [2024-11-05 12:51:29.990813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:00.758 [2024-11-05 12:51:29.990826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:00.758 [2024-11-05 12:51:29.990838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.017 [2024-11-05 12:51:30.004172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.017 [2024-11-05 12:51:30.004565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.017 [2024-11-05 12:51:30.004599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.017 [2024-11-05 12:51:30.004619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.017 [2024-11-05 12:51:30.004837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.017 [2024-11-05 12:51:30.005068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.017 [2024-11-05 12:51:30.005092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.005107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.005122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.017653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.018027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.018058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.018076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.018333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.018533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.018555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.018568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.018581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.031288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.031744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.031797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.031826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.032178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.032549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.032580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.032602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.032618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.044638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.044955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.044987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.045005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.045219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.045447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.045468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.045481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.045493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.058046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.058446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.058476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.058493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.058736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.058982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.059006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.059021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.059034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.071415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.071832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.071869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.071888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.072117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.072351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.072371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.072389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.072402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.084785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.085167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.085198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.085215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.085456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.085665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.085685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.085697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.085710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.098097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.098518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.098546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.098563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.098784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.099028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.099050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.099063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.099076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.111526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.111913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.111943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.111960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.112189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.112398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.112418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.112431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.112443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.124962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.018 [2024-11-05 12:51:30.125369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.018 [2024-11-05 12:51:30.125399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.018 [2024-11-05 12:51:30.125416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.018 [2024-11-05 12:51:30.125659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.018 [2024-11-05 12:51:30.125902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.018 [2024-11-05 12:51:30.125925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.018 [2024-11-05 12:51:30.125940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.018 [2024-11-05 12:51:30.125953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.018 [2024-11-05 12:51:30.138387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.138768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.138797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.138814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.139067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.139295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.139317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.139330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.139342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.151743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.152165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.152195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.152211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.152453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.152660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.152679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.152692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.152704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.165182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.165588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.165617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.165639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.165871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.166076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.166098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.166112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.166124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.178874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.179224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.179255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.179272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.179503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.179735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.179755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.179769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.179782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.192249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.192570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.192599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.192616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.192845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.193089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.193111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.193124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.193136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.205591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.205980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.206010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.206027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.206256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.206468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.206489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.206501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.206514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.219064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.219399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.219429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.219445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.219667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.219919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.219941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.219955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.219968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.232449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.232891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.232921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.232938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.233179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.233407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.233428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.233442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.019 [2024-11-05 12:51:30.233454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.019 [2024-11-05 12:51:30.245760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.019 [2024-11-05 12:51:30.246083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.019 [2024-11-05 12:51:30.246128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.019 [2024-11-05 12:51:30.246145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.019 [2024-11-05 12:51:30.246384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.019 [2024-11-05 12:51:30.246609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.019 [2024-11-05 12:51:30.246630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.019 [2024-11-05 12:51:30.246649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.020 [2024-11-05 12:51:30.246663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.278 [2024-11-05 12:51:30.259268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.278 [2024-11-05 12:51:30.259704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.278 [2024-11-05 12:51:30.259734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.278 [2024-11-05 12:51:30.259751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.278 [2024-11-05 12:51:30.259991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.278 [2024-11-05 12:51:30.260230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.278 [2024-11-05 12:51:30.260250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.278 [2024-11-05 12:51:30.260263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.278 [2024-11-05 12:51:30.260275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.278 [2024-11-05 12:51:30.272610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.278 [2024-11-05 12:51:30.273008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.278 [2024-11-05 12:51:30.273038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.278 [2024-11-05 12:51:30.273055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.278 [2024-11-05 12:51:30.273298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.278 [2024-11-05 12:51:30.273491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.278 [2024-11-05 12:51:30.273511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.278 [2024-11-05 12:51:30.273523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.278 [2024-11-05 12:51:30.273535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.278 5608.25 IOPS, 21.91 MiB/s [2024-11-05T11:51:30.516Z] [2024-11-05 12:51:30.285954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.279 [2024-11-05 12:51:30.286362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.279 [2024-11-05 12:51:30.286391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.279 [2024-11-05 12:51:30.286407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.279 [2024-11-05 12:51:30.286629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.279 [2024-11-05 12:51:30.286839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.279 [2024-11-05 12:51:30.286883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.279 [2024-11-05 12:51:30.286898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.279 [2024-11-05 12:51:30.286910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.279 [2024-11-05 12:51:30.299354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.279 [2024-11-05 12:51:30.299768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.279 [2024-11-05 12:51:30.299797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.279 [2024-11-05 12:51:30.299814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.279 [2024-11-05 12:51:30.300064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.279 [2024-11-05 12:51:30.300295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.279 [2024-11-05 12:51:30.300315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.279 [2024-11-05 12:51:30.300328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.279 [2024-11-05 12:51:30.300341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.279 [2024-11-05 12:51:30.312648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.279 [2024-11-05 12:51:30.313064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.279 [2024-11-05 12:51:30.313093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.279 [2024-11-05 12:51:30.313110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.279 [2024-11-05 12:51:30.313353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.279 [2024-11-05 12:51:30.313552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.279 [2024-11-05 12:51:30.313571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.279 [2024-11-05 12:51:30.313599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.279 [2024-11-05 12:51:30.313612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.279 [2024-11-05 12:51:30.326183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.279 [2024-11-05 12:51:30.326588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.279 [2024-11-05 12:51:30.326617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.279 [2024-11-05 12:51:30.326634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.279 [2024-11-05 12:51:30.326889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.279 [2024-11-05 12:51:30.327115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.279 [2024-11-05 12:51:30.327137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.279 [2024-11-05 12:51:30.327151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.279 [2024-11-05 12:51:30.327165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.279 [2024-11-05 12:51:30.339616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.279 [2024-11-05 12:51:30.339990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.279 [2024-11-05 12:51:30.340020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.279 [2024-11-05 12:51:30.340042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.279 [2024-11-05 12:51:30.340297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.279 [2024-11-05 12:51:30.340504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.279 [2024-11-05 12:51:30.340526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.279 [2024-11-05 12:51:30.340539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.279 [2024-11-05 12:51:30.340551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.279 [2024-11-05 12:51:30.353062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.279 [2024-11-05 12:51:30.353426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.279 [2024-11-05 12:51:30.353454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.279 [2024-11-05 12:51:30.353471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.279 [2024-11-05 12:51:30.353694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.279 [2024-11-05 12:51:30.353929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.279 [2024-11-05 12:51:30.353950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.279 [2024-11-05 12:51:30.353964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.279 [2024-11-05 12:51:30.353976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.279 [2024-11-05 12:51:30.366286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.279 [2024-11-05 12:51:30.366652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.279 [2024-11-05 12:51:30.366681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.279 [2024-11-05 12:51:30.366697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.279 [2024-11-05 12:51:30.366950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.279 [2024-11-05 12:51:30.367197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.279 [2024-11-05 12:51:30.367217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.279 [2024-11-05 12:51:30.367230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.279 [2024-11-05 12:51:30.367242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.279 [2024-11-05 12:51:30.379641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.279 [2024-11-05 12:51:30.380020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.279 [2024-11-05 12:51:30.380049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.279 [2024-11-05 12:51:30.380067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.279 [2024-11-05 12:51:30.380306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.279 [2024-11-05 12:51:30.380522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.279 [2024-11-05 12:51:30.380542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.279 [2024-11-05 12:51:30.380554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.380566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.393067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.393500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.393529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.393546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.280 [2024-11-05 12:51:30.393775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.280 [2024-11-05 12:51:30.394026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.280 [2024-11-05 12:51:30.394047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.280 [2024-11-05 12:51:30.394062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.394076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.406514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.406895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.406925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.406942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.280 [2024-11-05 12:51:30.407170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.280 [2024-11-05 12:51:30.407385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.280 [2024-11-05 12:51:30.407405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.280 [2024-11-05 12:51:30.407418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.407444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.419922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.420275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.420304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.420320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.280 [2024-11-05 12:51:30.420567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.280 [2024-11-05 12:51:30.420797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.280 [2024-11-05 12:51:30.420820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.280 [2024-11-05 12:51:30.420839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.420853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.433398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.433721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.433750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.433767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.280 [2024-11-05 12:51:30.434034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.280 [2024-11-05 12:51:30.434267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.280 [2024-11-05 12:51:30.434287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.280 [2024-11-05 12:51:30.434299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.434311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.446857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.447284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.447313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.447330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.280 [2024-11-05 12:51:30.447557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.280 [2024-11-05 12:51:30.447765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.280 [2024-11-05 12:51:30.447785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.280 [2024-11-05 12:51:30.447798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.447810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.460310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.460728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.460758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.460776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.280 [2024-11-05 12:51:30.461017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.280 [2024-11-05 12:51:30.461255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.280 [2024-11-05 12:51:30.461275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.280 [2024-11-05 12:51:30.461288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.461300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.473666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.474051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.474081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.474099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.280 [2024-11-05 12:51:30.474349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.280 [2024-11-05 12:51:30.474543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.280 [2024-11-05 12:51:30.474564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.280 [2024-11-05 12:51:30.474577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.280 [2024-11-05 12:51:30.474589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.280 [2024-11-05 12:51:30.486990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.280 [2024-11-05 12:51:30.487331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.280 [2024-11-05 12:51:30.487359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.280 [2024-11-05 12:51:30.487375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.281 [2024-11-05 12:51:30.487590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.281 [2024-11-05 12:51:30.487799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.281 [2024-11-05 12:51:30.487818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.281 [2024-11-05 12:51:30.487831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.281 [2024-11-05 12:51:30.487867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.281 [2024-11-05 12:51:30.500222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.281 [2024-11-05 12:51:30.500512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.281 [2024-11-05 12:51:30.500554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.281 [2024-11-05 12:51:30.500571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.281 [2024-11-05 12:51:30.500786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.281 [2024-11-05 12:51:30.501027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.281 [2024-11-05 12:51:30.501049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.281 [2024-11-05 12:51:30.501062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.281 [2024-11-05 12:51:30.501075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.281 [2024-11-05 12:51:30.513429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.281 [2024-11-05 12:51:30.513778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.281 [2024-11-05 12:51:30.513807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.281 [2024-11-05 12:51:30.513828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.281 [2024-11-05 12:51:30.514067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.281 [2024-11-05 12:51:30.514315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.281 [2024-11-05 12:51:30.514337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.281 [2024-11-05 12:51:30.514351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.281 [2024-11-05 12:51:30.514365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.526871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.527251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.527278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.527294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.527503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.527697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.527716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.527729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.527741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.540096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.540526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.540555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.540573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.540809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.541053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.541076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.541090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.541102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.553317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.553632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.553661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.553677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.553903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.554127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.554164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.554177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.554189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.566626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.566993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.567024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.567041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.567284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.567498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.567519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.567532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.567545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.579819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.580173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.580203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.580219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.580455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.580664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.580685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.580698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.580710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.593076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.593444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.593472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.593489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.593725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.593976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.593999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.594018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.594032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.606431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.606845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.606881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.606899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.607140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.607369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.607389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.607402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.607414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.619726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.620171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.620200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.620216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.620460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.620667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.620687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.620700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.540 [2024-11-05 12:51:30.620712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.540 [2024-11-05 12:51:30.632913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.540 [2024-11-05 12:51:30.633252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.540 [2024-11-05 12:51:30.633280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.540 [2024-11-05 12:51:30.633297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.540 [2024-11-05 12:51:30.633519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.540 [2024-11-05 12:51:30.633744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.540 [2024-11-05 12:51:30.633764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.540 [2024-11-05 12:51:30.633777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.633789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.646110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.541 [2024-11-05 12:51:30.646546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.541 [2024-11-05 12:51:30.646576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.541 [2024-11-05 12:51:30.646593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.541 [2024-11-05 12:51:30.646817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.541 [2024-11-05 12:51:30.647055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.541 [2024-11-05 12:51:30.647076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.541 [2024-11-05 12:51:30.647089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.647102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.659417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.541 [2024-11-05 12:51:30.659831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.541 [2024-11-05 12:51:30.659868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.541 [2024-11-05 12:51:30.659887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.541 [2024-11-05 12:51:30.660128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.541 [2024-11-05 12:51:30.660336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.541 [2024-11-05 12:51:30.660356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.541 [2024-11-05 12:51:30.660369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.660380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.672761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.541 [2024-11-05 12:51:30.673142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.541 [2024-11-05 12:51:30.673172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.541 [2024-11-05 12:51:30.673189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.541 [2024-11-05 12:51:30.673433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.541 [2024-11-05 12:51:30.673641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.541 [2024-11-05 12:51:30.673661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.541 [2024-11-05 12:51:30.673673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.673685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.686058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.541 [2024-11-05 12:51:30.686447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.541 [2024-11-05 12:51:30.686475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.541 [2024-11-05 12:51:30.686497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.541 [2024-11-05 12:51:30.686731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.541 [2024-11-05 12:51:30.686953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.541 [2024-11-05 12:51:30.686975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.541 [2024-11-05 12:51:30.686988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.687001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.699391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.541 [2024-11-05 12:51:30.699744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.541 [2024-11-05 12:51:30.699773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.541 [2024-11-05 12:51:30.699789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.541 [2024-11-05 12:51:30.700030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.541 [2024-11-05 12:51:30.700279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.541 [2024-11-05 12:51:30.700300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.541 [2024-11-05 12:51:30.700313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.700326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.712660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.541 [2024-11-05 12:51:30.713102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.541 [2024-11-05 12:51:30.713132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.541 [2024-11-05 12:51:30.713148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.541 [2024-11-05 12:51:30.713391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.541 [2024-11-05 12:51:30.713599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.541 [2024-11-05 12:51:30.713619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.541 [2024-11-05 12:51:30.713632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.713644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.725975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:01.541 [2024-11-05 12:51:30.726352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:01.541 [2024-11-05 12:51:30.726381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:01.541 [2024-11-05 12:51:30.726397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:01.541 [2024-11-05 12:51:30.726641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:01.541 [2024-11-05 12:51:30.726880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:01.541 [2024-11-05 12:51:30.726902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:01.541 [2024-11-05 12:51:30.726931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:01.541 [2024-11-05 12:51:30.726943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:01.541 [2024-11-05 12:51:30.739251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.541 [2024-11-05 12:51:30.739602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.541 [2024-11-05 12:51:30.739631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.541 [2024-11-05 12:51:30.739647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.541 [2024-11-05 12:51:30.739899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.541 [2024-11-05 12:51:30.740118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.541 [2024-11-05 12:51:30.740138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.541 [2024-11-05 12:51:30.740152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.541 [2024-11-05 12:51:30.740178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.541 [2024-11-05 12:51:30.752516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.541 [2024-11-05 12:51:30.752909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.541 [2024-11-05 12:51:30.752939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.541 [2024-11-05 12:51:30.752957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.541 [2024-11-05 12:51:30.753187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.541 [2024-11-05 12:51:30.753395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.541 [2024-11-05 12:51:30.753415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.541 [2024-11-05 12:51:30.753427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.541 [2024-11-05 12:51:30.753441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.541 [2024-11-05 12:51:30.765872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.541 [2024-11-05 12:51:30.766212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.541 [2024-11-05 12:51:30.766241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.541 [2024-11-05 12:51:30.766256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.541 [2024-11-05 12:51:30.766478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.541 [2024-11-05 12:51:30.766685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.542 [2024-11-05 12:51:30.766705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.542 [2024-11-05 12:51:30.766723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.542 [2024-11-05 12:51:30.766736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.542 [2024-11-05 12:51:30.779514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.779866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.779896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.779914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.780128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.780387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.780408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.780421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.780433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.792786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.793268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.793296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.793312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.793528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.793731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.793750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.793762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.793774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.806050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.806471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.806538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.806554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.806784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.807037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.807060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.807074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.807087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.819056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.819406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.819434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.819450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.819685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.819932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.819953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.819966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.819979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.832200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.832609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.832637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.832653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.832902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.833114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.833133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.833145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.833158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.845334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.845650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.845678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.845695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.845922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.846122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.846143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.846171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.846183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.858467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.858811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.858839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.858870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.859129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.859333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.859353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.859366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.859378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.871422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.871838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.871875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.871893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.872129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.872331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.872351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.872363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.872375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.884406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.884762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.884791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.884807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.885072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.885299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.885319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.885331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.801 [2024-11-05 12:51:30.885343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.801 [2024-11-05 12:51:30.897439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.801 [2024-11-05 12:51:30.897846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.801 [2024-11-05 12:51:30.897899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.801 [2024-11-05 12:51:30.897916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.801 [2024-11-05 12:51:30.898150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.801 [2024-11-05 12:51:30.898357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.801 [2024-11-05 12:51:30.898378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.801 [2024-11-05 12:51:30.898390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.898401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:30.910459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:30.910802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:30.910830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:30.910847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:30.911112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:30.911332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:30.911350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:30.911362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.911373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:30.923448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:30.923759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:30.923787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:30.923803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:30.924065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:30.924274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:30.924294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:30.924308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.924320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:30.936792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:30.937239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:30.937283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:30.937301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:30.937541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:30.937763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:30.937783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:30.937801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.937814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:30.950042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:30.950387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:30.950415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:30.950431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:30.950647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:30.950849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:30.950895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:30.950910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.950923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:30.963188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:30.963594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:30.963623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:30.963639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:30.963885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:30.964098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:30.964119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:30.964132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.964145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:30.976188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:30.976603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:30.976633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:30.976649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:30.976895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:30.977088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:30.977107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:30.977120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.977132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:30.989405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:30.989811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:30.989839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:30.989855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:30.990121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:30.990341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:30.990360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:30.990373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:30.990385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:31.002640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:31.003142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:31.003195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:31.003212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:31.003469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:31.003656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:31.003674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:31.003686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:31.003698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:31.015895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:31.016378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:31.016434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:31.016451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:31.016697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:31.016957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:31.016979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.802 [2024-11-05 12:51:31.016993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.802 [2024-11-05 12:51:31.017006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:01.802 [2024-11-05 12:51:31.028989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:01.802 [2024-11-05 12:51:31.029405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.802 [2024-11-05 12:51:31.029459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:01.802 [2024-11-05 12:51:31.029480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:01.802 [2024-11-05 12:51:31.029724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:01.802 [2024-11-05 12:51:31.029939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:01.802 [2024-11-05 12:51:31.029959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:01.803 [2024-11-05 12:51:31.029972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:01.803 [2024-11-05 12:51:31.029984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.042425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.042765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.042793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.042809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.043055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.043289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.043308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.043321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.043332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.055688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.056097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.056165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.056181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.056424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.056628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.056647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.056660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.056672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.068927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.069314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.069343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.069359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.069594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.069806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.069826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.069838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.069872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.082120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.082558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.082586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.082602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.082837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.083039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.083060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.083073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.083085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.095365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.095816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.095877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.095895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.096138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.096341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.096360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.096373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.096384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.108543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.108856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.108935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.108951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.109186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.109390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.109410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.109427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.109438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.121699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.122179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.122229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.122245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.122487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.122674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.122693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.122706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.122718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.134833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.135266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.135318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.135334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.135577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.135763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.135783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.135796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.135808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.062 [2024-11-05 12:51:31.148103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.062 [2024-11-05 12:51:31.148531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.062 [2024-11-05 12:51:31.148560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.062 [2024-11-05 12:51:31.148576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.062 [2024-11-05 12:51:31.148809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.062 [2024-11-05 12:51:31.149033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.062 [2024-11-05 12:51:31.149055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.062 [2024-11-05 12:51:31.149068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.062 [2024-11-05 12:51:31.149080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.161205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.161549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.161577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.161593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.161828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.162064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.162087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.162100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.162113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.174252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.174603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.174633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.174650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.174905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.175116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.175138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.175152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.175165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.187610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.188084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.188115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.188132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.188386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.188585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.188606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.188620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.188632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.200651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.201006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.201035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.201056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.201294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.201496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.201516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.201529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.201541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.213642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.213993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.214021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.214037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.214269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.214472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.214492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.214504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.214515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.226711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.227111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.227167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.227183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.227422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.227609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.227628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.227641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.227653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.239736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.240137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.240188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.240204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.240448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.240640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.240658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.240670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.240681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.252782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.253195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.253224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.253240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.253474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.253677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.253695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.253707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.253719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.265857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.266211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.266241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.266257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.266491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.266694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.266714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.266727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.063 [2024-11-05 12:51:31.266738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.063 [2024-11-05 12:51:31.278967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.063 [2024-11-05 12:51:31.279381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.063 [2024-11-05 12:51:31.279410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.063 [2024-11-05 12:51:31.279427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.063 [2024-11-05 12:51:31.279669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.063 [2024-11-05 12:51:31.279898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.063 [2024-11-05 12:51:31.279919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.063 [2024-11-05 12:51:31.279940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.064 [2024-11-05 12:51:31.279953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.064 4486.60 IOPS, 17.53 MiB/s [2024-11-05T11:51:31.302Z] [2024-11-05 12:51:31.292089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.064 [2024-11-05 12:51:31.292435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.064 [2024-11-05 12:51:31.292463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.064 [2024-11-05 12:51:31.292479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.064 [2024-11-05 12:51:31.292715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.064 [2024-11-05 12:51:31.292946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.064 [2024-11-05 12:51:31.292966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.064 [2024-11-05 12:51:31.292979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.064 [2024-11-05 12:51:31.292992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.305200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.305659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.323 [2024-11-05 12:51:31.305710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.323 [2024-11-05 12:51:31.305726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.323 [2024-11-05 12:51:31.305995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.323 [2024-11-05 12:51:31.306253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.323 [2024-11-05 12:51:31.306273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.323 [2024-11-05 12:51:31.306285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.323 [2024-11-05 12:51:31.306297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.318356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.318747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.323 [2024-11-05 12:51:31.318802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.323 [2024-11-05 12:51:31.318818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.323 [2024-11-05 12:51:31.319074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.323 [2024-11-05 12:51:31.319280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.323 [2024-11-05 12:51:31.319298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.323 [2024-11-05 12:51:31.319309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.323 [2024-11-05 12:51:31.319321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.331428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.331769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.323 [2024-11-05 12:51:31.331797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.323 [2024-11-05 12:51:31.331813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.323 [2024-11-05 12:51:31.332054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.323 [2024-11-05 12:51:31.332260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.323 [2024-11-05 12:51:31.332278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.323 [2024-11-05 12:51:31.332290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.323 [2024-11-05 12:51:31.332302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.344533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.344939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.323 [2024-11-05 12:51:31.344968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.323 [2024-11-05 12:51:31.344984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.323 [2024-11-05 12:51:31.345219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.323 [2024-11-05 12:51:31.345423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.323 [2024-11-05 12:51:31.345443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.323 [2024-11-05 12:51:31.345456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.323 [2024-11-05 12:51:31.345467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.357540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.357931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.323 [2024-11-05 12:51:31.357960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.323 [2024-11-05 12:51:31.357976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.323 [2024-11-05 12:51:31.358221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.323 [2024-11-05 12:51:31.358424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.323 [2024-11-05 12:51:31.358444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.323 [2024-11-05 12:51:31.358457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.323 [2024-11-05 12:51:31.358469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.370604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.370944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.323 [2024-11-05 12:51:31.370973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.323 [2024-11-05 12:51:31.370994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.323 [2024-11-05 12:51:31.371223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.323 [2024-11-05 12:51:31.371410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.323 [2024-11-05 12:51:31.371430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.323 [2024-11-05 12:51:31.371442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.323 [2024-11-05 12:51:31.371455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.383681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.384105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.323 [2024-11-05 12:51:31.384134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.323 [2024-11-05 12:51:31.384150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.323 [2024-11-05 12:51:31.384385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.323 [2024-11-05 12:51:31.384588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.323 [2024-11-05 12:51:31.384607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.323 [2024-11-05 12:51:31.384619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.323 [2024-11-05 12:51:31.384631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.323 [2024-11-05 12:51:31.396789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.323 [2024-11-05 12:51:31.397204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.397233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.397249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.397484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.397671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.397689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.397701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.397712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.409795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.410145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.410174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.410190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.410425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.410633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.410653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.410666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.410677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.422829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.423176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.423204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.423235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.423450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.423653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.423672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.423685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.423697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.436102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.436522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.436550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.436566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.436801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.437034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.437071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.437085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.437098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.449267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.449611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.449639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.449655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.449900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.450100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.450119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.450137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.450150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.462475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.462830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.462881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.462914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.463157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.463362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.463380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.463393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.463404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.475813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.476193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.476222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.476239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.476479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.476687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.476707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.476720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.476732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.488937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.489368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.489397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.489413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.489647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.489849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.489880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.489901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.324 [2024-11-05 12:51:31.489913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.324 [2024-11-05 12:51:31.502066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.324 [2024-11-05 12:51:31.502427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.324 [2024-11-05 12:51:31.502455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.324 [2024-11-05 12:51:31.502471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.324 [2024-11-05 12:51:31.502707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.324 [2024-11-05 12:51:31.502937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.324 [2024-11-05 12:51:31.502957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.324 [2024-11-05 12:51:31.502970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.325 [2024-11-05 12:51:31.502982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.325 [2024-11-05 12:51:31.515274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.325 [2024-11-05 12:51:31.515666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.325 [2024-11-05 12:51:31.515719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.325 [2024-11-05 12:51:31.515734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.325 [2024-11-05 12:51:31.515974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.325 [2024-11-05 12:51:31.516182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.325 [2024-11-05 12:51:31.516201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.325 [2024-11-05 12:51:31.516214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.325 [2024-11-05 12:51:31.516225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.325 [2024-11-05 12:51:31.528475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.325 [2024-11-05 12:51:31.528818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.325 [2024-11-05 12:51:31.528845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.325 [2024-11-05 12:51:31.528869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.325 [2024-11-05 12:51:31.529120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.325 [2024-11-05 12:51:31.529341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.325 [2024-11-05 12:51:31.529360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.325 [2024-11-05 12:51:31.529372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.325 [2024-11-05 12:51:31.529384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.325 [2024-11-05 12:51:31.541825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.325 [2024-11-05 12:51:31.542261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.325 [2024-11-05 12:51:31.542313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.325 [2024-11-05 12:51:31.542334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.325 [2024-11-05 12:51:31.542579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.325 [2024-11-05 12:51:31.542770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.325 [2024-11-05 12:51:31.542790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.325 [2024-11-05 12:51:31.542803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.325 [2024-11-05 12:51:31.542815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.325 [2024-11-05 12:51:31.555000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.325 [2024-11-05 12:51:31.555412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.325 [2024-11-05 12:51:31.555442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.325 [2024-11-05 12:51:31.555458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.325 [2024-11-05 12:51:31.555698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.325 [2024-11-05 12:51:31.555948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.325 [2024-11-05 12:51:31.555979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.325 [2024-11-05 12:51:31.555993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.325 [2024-11-05 12:51:31.556007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.584 [2024-11-05 12:51:31.568506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.584 [2024-11-05 12:51:31.568925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.584 [2024-11-05 12:51:31.568954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.584 [2024-11-05 12:51:31.568970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.584 [2024-11-05 12:51:31.569208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.584 [2024-11-05 12:51:31.569410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.584 [2024-11-05 12:51:31.569429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.584 [2024-11-05 12:51:31.569441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.584 [2024-11-05 12:51:31.569453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.584 [2024-11-05 12:51:31.581791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.584 [2024-11-05 12:51:31.582161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.584 [2024-11-05 12:51:31.582192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.584 [2024-11-05 12:51:31.582223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.584 [2024-11-05 12:51:31.582439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.584 [2024-11-05 12:51:31.582655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.584 [2024-11-05 12:51:31.582675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.584 [2024-11-05 12:51:31.582689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.584 [2024-11-05 12:51:31.582701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.584 [2024-11-05 12:51:31.594982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.584 [2024-11-05 12:51:31.595369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.584 [2024-11-05 12:51:31.595396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.584 [2024-11-05 12:51:31.595411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.584 [2024-11-05 12:51:31.595619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.585 [2024-11-05 12:51:31.595822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.585 [2024-11-05 12:51:31.595857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.585 [2024-11-05 12:51:31.595883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.585 [2024-11-05 12:51:31.595896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.585 [2024-11-05 12:51:31.608066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.585 [2024-11-05 12:51:31.608457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.585 [2024-11-05 12:51:31.608511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.585 [2024-11-05 12:51:31.608528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.585 [2024-11-05 12:51:31.608771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.585 [2024-11-05 12:51:31.608988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.585 [2024-11-05 12:51:31.609009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.585 [2024-11-05 12:51:31.609022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.585 [2024-11-05 12:51:31.609034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.585 [2024-11-05 12:51:31.621273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.585 [2024-11-05 12:51:31.621614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.585 [2024-11-05 12:51:31.621642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.585 [2024-11-05 12:51:31.621658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.585 [2024-11-05 12:51:31.621905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.585 [2024-11-05 12:51:31.622118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.585 [2024-11-05 12:51:31.622138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.585 [2024-11-05 12:51:31.622158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.585 [2024-11-05 12:51:31.622186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.585 [2024-11-05 12:51:31.634262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.585 [2024-11-05 12:51:31.634606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.585 [2024-11-05 12:51:31.634634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.585 [2024-11-05 12:51:31.634650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.585 [2024-11-05 12:51:31.634896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.585 [2024-11-05 12:51:31.635089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.585 [2024-11-05 12:51:31.635108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.585 [2024-11-05 12:51:31.635120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.585 [2024-11-05 12:51:31.635132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.585 [2024-11-05 12:51:31.647422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.585 [2024-11-05 12:51:31.647767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.585 [2024-11-05 12:51:31.647796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.585 [2024-11-05 12:51:31.647812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.585 [2024-11-05 12:51:31.648077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.585 [2024-11-05 12:51:31.648288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.585 [2024-11-05 12:51:31.648309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.585 [2024-11-05 12:51:31.648321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.585 [2024-11-05 12:51:31.648347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.585 [2024-11-05 12:51:31.660610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.585 [2024-11-05 12:51:31.661018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.585 [2024-11-05 12:51:31.661047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.585 [2024-11-05 12:51:31.661063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.585 [2024-11-05 12:51:31.661304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.585 [2024-11-05 12:51:31.661507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.585 [2024-11-05 12:51:31.661526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.585 [2024-11-05 12:51:31.661539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.585 [2024-11-05 12:51:31.661550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.585 [2024-11-05 12:51:31.673816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.585 [2024-11-05 12:51:31.674301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.585 [2024-11-05 12:51:31.674353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.585 [2024-11-05 12:51:31.674369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.585 [2024-11-05 12:51:31.674613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.585 [2024-11-05 12:51:31.674800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.585 [2024-11-05 12:51:31.674820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.585 [2024-11-05 12:51:31.674833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.585 [2024-11-05 12:51:31.674870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.585 [2024-11-05 12:51:31.686936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.585 [2024-11-05 12:51:31.687296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.585 [2024-11-05 12:51:31.687370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.585 [2024-11-05 12:51:31.687386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.585 [2024-11-05 12:51:31.687621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.585 [2024-11-05 12:51:31.687823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.585 [2024-11-05 12:51:31.687842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.585 [2024-11-05 12:51:31.687855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.585 [2024-11-05 12:51:31.687903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.585 [2024-11-05 12:51:31.700237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.585 [2024-11-05 12:51:31.700583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.585 [2024-11-05 12:51:31.700612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.585 [2024-11-05 12:51:31.700628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.585 [2024-11-05 12:51:31.700848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.585 [2024-11-05 12:51:31.701082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.585 [2024-11-05 12:51:31.701101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.585 [2024-11-05 12:51:31.701114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.585 [2024-11-05 12:51:31.701126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.585 [2024-11-05 12:51:31.713317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.585 [2024-11-05 12:51:31.713722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.585 [2024-11-05 12:51:31.713751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.585 [2024-11-05 12:51:31.713772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.585 [2024-11-05 12:51:31.714041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.585 [2024-11-05 12:51:31.714269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.585 [2024-11-05 12:51:31.714290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.585 [2024-11-05 12:51:31.714302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.585 [2024-11-05 12:51:31.714314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.585 [2024-11-05 12:51:31.726367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.585 [2024-11-05 12:51:31.726772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.585 [2024-11-05 12:51:31.726800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.585 [2024-11-05 12:51:31.726816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.585 [2024-11-05 12:51:31.727060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.727266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.727287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.727299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.727311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.586 [2024-11-05 12:51:31.739364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.586 [2024-11-05 12:51:31.739756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.586 [2024-11-05 12:51:31.739808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.586 [2024-11-05 12:51:31.739824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.586 [2024-11-05 12:51:31.740064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.740271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.740289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.740302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.740314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.586 [2024-11-05 12:51:31.752395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.586 [2024-11-05 12:51:31.752854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.586 [2024-11-05 12:51:31.752913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.586 [2024-11-05 12:51:31.752929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.586 [2024-11-05 12:51:31.753166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.753358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.753391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.753403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.753415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.586 [2024-11-05 12:51:31.765570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.586 [2024-11-05 12:51:31.765884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.586 [2024-11-05 12:51:31.765912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.586 [2024-11-05 12:51:31.765929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.586 [2024-11-05 12:51:31.766138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.766342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.766361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.766374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.766386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.586 [2024-11-05 12:51:31.778635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.586 [2024-11-05 12:51:31.779007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.586 [2024-11-05 12:51:31.779036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.586 [2024-11-05 12:51:31.779051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.586 [2024-11-05 12:51:31.779266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.779470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.779489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.779501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.779513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.586 [2024-11-05 12:51:31.791888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.586 [2024-11-05 12:51:31.792274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.586 [2024-11-05 12:51:31.792303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.586 [2024-11-05 12:51:31.792320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.586 [2024-11-05 12:51:31.792561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.792753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.792772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.792791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.792803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.586 [2024-11-05 12:51:31.805121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.586 [2024-11-05 12:51:31.805531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.586 [2024-11-05 12:51:31.805575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.586 [2024-11-05 12:51:31.805591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.586 [2024-11-05 12:51:31.805820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.806043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.806064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.806078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.806091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.586 [2024-11-05 12:51:31.818407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.586 [2024-11-05 12:51:31.818769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.586 [2024-11-05 12:51:31.818797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.586 [2024-11-05 12:51:31.818813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.586 [2024-11-05 12:51:31.819057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.586 [2024-11-05 12:51:31.819265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.586 [2024-11-05 12:51:31.819284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.586 [2024-11-05 12:51:31.819296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.586 [2024-11-05 12:51:31.819308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 [2024-11-05 12:51:31.831914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 [2024-11-05 12:51:31.832263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.832291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.832308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.832540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.832728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.832746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.832758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.832770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 [2024-11-05 12:51:31.844924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 [2024-11-05 12:51:31.845299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.845327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.845343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.845557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.845760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.845779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.845792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.845804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 [2024-11-05 12:51:31.857997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 [2024-11-05 12:51:31.858306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.858334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.858350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.858565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.858768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.858787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.858799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.858811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 [2024-11-05 12:51:31.871209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 [2024-11-05 12:51:31.871551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.871579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.871595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.871829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.872062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.872083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.872095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.872108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 [2024-11-05 12:51:31.884275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 [2024-11-05 12:51:31.884581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.884608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.884629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.884845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.885077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.885097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.885109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.885121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 811910 Killed "${NVMF_APP[@]}" "$@"
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=812861
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 812861
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 812861 ']'
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:02.846 [2024-11-05 12:51:31.897813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:37:02.846 12:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:02.846 [2024-11-05 12:51:31.898195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.898223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.898240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.898483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.898698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.898719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.898732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.898744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 [2024-11-05 12:51:31.911335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 [2024-11-05 12:51:31.911694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.911721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.911737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.911974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.912204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.912224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.912236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.912248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.846 [2024-11-05 12:51:31.924681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.846 [2024-11-05 12:51:31.925046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.846 [2024-11-05 12:51:31.925075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.846 [2024-11-05 12:51:31.925092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.846 [2024-11-05 12:51:31.925362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.846 [2024-11-05 12:51:31.925574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.846 [2024-11-05 12:51:31.925594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.846 [2024-11-05 12:51:31.925618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.846 [2024-11-05 12:51:31.925629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:02.847 [2024-11-05 12:51:31.937898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:02.847 [2024-11-05 12:51:31.938331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.847 [2024-11-05 12:51:31.938369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420
00:37:02.847 [2024-11-05 12:51:31.938386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set
00:37:02.847 [2024-11-05 12:51:31.938635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor
00:37:02.847 [2024-11-05 12:51:31.938872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:02.847 [2024-11-05 12:51:31.938894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:02.847 [2024-11-05 12:51:31.938908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:02.847 [2024-11-05 12:51:31.938920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[2024-11-05 12:51:31.943951] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization...
00:37:02.847 [2024-11-05 12:51:31.944009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.847 [2024-11-05 12:51:31.951295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:31.951602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:31.951644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:31.951661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:31.951895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.847 [2024-11-05 12:51:31.952094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.847 [2024-11-05 12:51:31.952114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.847 [2024-11-05 12:51:31.952127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.847 [2024-11-05 12:51:31.952139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.847 [2024-11-05 12:51:31.964630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:31.965011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:31.965048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:31.965064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:31.965288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.847 [2024-11-05 12:51:31.965496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.847 [2024-11-05 12:51:31.965515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.847 [2024-11-05 12:51:31.965528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.847 [2024-11-05 12:51:31.965539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.847 [2024-11-05 12:51:31.977857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:31.978249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:31.978278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:31.978297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:31.978537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.847 [2024-11-05 12:51:31.978746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.847 [2024-11-05 12:51:31.978766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.847 [2024-11-05 12:51:31.978778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.847 [2024-11-05 12:51:31.978789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.847 [2024-11-05 12:51:31.991292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:31.991648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:31.991677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:31.991707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:31.991980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.847 [2024-11-05 12:51:31.992207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.847 [2024-11-05 12:51:31.992242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.847 [2024-11-05 12:51:31.992256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.847 [2024-11-05 12:51:31.992268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.847 [2024-11-05 12:51:32.004546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:32.004953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:32.005011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:32.005027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:32.005271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.847 [2024-11-05 12:51:32.005464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.847 [2024-11-05 12:51:32.005483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.847 [2024-11-05 12:51:32.005496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.847 [2024-11-05 12:51:32.005508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.847 [2024-11-05 12:51:32.017721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:32.018048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:32.018077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:32.018093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:32.018116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:02.847 [2024-11-05 12:51:32.018315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.847 [2024-11-05 12:51:32.018543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.847 [2024-11-05 12:51:32.018563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.847 [2024-11-05 12:51:32.018576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.847 [2024-11-05 12:51:32.018587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.847 [2024-11-05 12:51:32.031183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:32.031735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:32.031783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:32.031803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:32.032050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.847 [2024-11-05 12:51:32.032296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.847 [2024-11-05 12:51:32.032317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.847 [2024-11-05 12:51:32.032332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.847 [2024-11-05 12:51:32.032346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.847 [2024-11-05 12:51:32.044617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.847 [2024-11-05 12:51:32.045012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.847 [2024-11-05 12:51:32.045054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.847 [2024-11-05 12:51:32.045071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.847 [2024-11-05 12:51:32.045323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.848 [2024-11-05 12:51:32.045518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.848 [2024-11-05 12:51:32.045537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.848 [2024-11-05 12:51:32.045551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.848 [2024-11-05 12:51:32.045564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.848 [2024-11-05 12:51:32.058036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.848 [2024-11-05 12:51:32.058397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.848 [2024-11-05 12:51:32.058426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.848 [2024-11-05 12:51:32.058443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.848 [2024-11-05 12:51:32.058664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.848 [2024-11-05 12:51:32.058901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.848 [2024-11-05 12:51:32.058923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.848 [2024-11-05 12:51:32.058938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.848 [2024-11-05 12:51:32.058950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:02.848 [2024-11-05 12:51:32.063969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.848 [2024-11-05 12:51:32.064001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.848 [2024-11-05 12:51:32.064025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.848 [2024-11-05 12:51:32.064036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:37:02.848 [2024-11-05 12:51:32.064046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.848 [2024-11-05 12:51:32.065380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:02.848 [2024-11-05 12:51:32.065443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:02.848 [2024-11-05 12:51:32.065446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.848 [2024-11-05 12:51:32.071620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:02.848 [2024-11-05 12:51:32.072134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.848 [2024-11-05 12:51:32.072180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:02.848 [2024-11-05 12:51:32.072199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:02.848 [2024-11-05 12:51:32.072437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:02.848 [2024-11-05 12:51:32.072661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:02.848 [2024-11-05 12:51:32.072682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:02.848 [2024-11-05 12:51:32.072697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:02.848 [2024-11-05 12:51:32.072712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:02.848 [2024-11-05 12:51:32.085200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.107 [2024-11-05 12:51:32.085682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.107 [2024-11-05 12:51:32.085731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.107 [2024-11-05 12:51:32.085750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.107 [2024-11-05 12:51:32.085983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.107 [2024-11-05 12:51:32.086221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.107 [2024-11-05 12:51:32.086243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.107 [2024-11-05 12:51:32.086260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.107 [2024-11-05 12:51:32.086276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.107 [2024-11-05 12:51:32.098790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.107 [2024-11-05 12:51:32.099365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.107 [2024-11-05 12:51:32.099415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.107 [2024-11-05 12:51:32.099436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.107 [2024-11-05 12:51:32.099673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.107 [2024-11-05 12:51:32.099925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.107 [2024-11-05 12:51:32.099949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.099968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.099984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 [2024-11-05 12:51:32.112328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.112900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.112951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.112981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.113235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.113445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.108 [2024-11-05 12:51:32.113466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.113482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.113497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 [2024-11-05 12:51:32.125829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.126403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.126450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.126470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.126721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.126957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.108 [2024-11-05 12:51:32.126980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.126995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.127012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 [2024-11-05 12:51:32.139362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.139907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.139948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.139969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.140225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.140434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.108 [2024-11-05 12:51:32.140455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.140470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.140486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 [2024-11-05 12:51:32.152999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.153367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.153403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.153420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.153650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.153912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.108 [2024-11-05 12:51:32.153935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.153949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.153963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 [2024-11-05 12:51:32.166594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.166981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.167011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.167028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.167242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.167470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.108 [2024-11-05 12:51:32.167491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.167505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.167518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.108 [2024-11-05 12:51:32.180185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.180553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.180583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.180600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.180824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.181053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.108 [2024-11-05 12:51:32.181076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.181091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.181105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 [2024-11-05 12:51:32.193767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.194123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.194155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.194173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.194408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.194639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.108 [2024-11-05 12:51:32.194661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.108 [2024-11-05 12:51:32.194676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.108 [2024-11-05 12:51:32.194698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.108 [2024-11-05 12:51:32.200016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.108 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.108 [2024-11-05 12:51:32.207388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.108 [2024-11-05 12:51:32.207699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.108 [2024-11-05 12:51:32.207742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.108 [2024-11-05 12:51:32.207759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.108 [2024-11-05 12:51:32.208009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.108 [2024-11-05 12:51:32.208257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.109 [2024-11-05 12:51:32.208277] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.109 [2024-11-05 12:51:32.208290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.109 [2024-11-05 12:51:32.208303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:03.109 [2024-11-05 12:51:32.220800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.109 [2024-11-05 12:51:32.221304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.109 [2024-11-05 12:51:32.221338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.109 [2024-11-05 12:51:32.221356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.109 [2024-11-05 12:51:32.221590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.109 [2024-11-05 12:51:32.221806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.109 [2024-11-05 12:51:32.221852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.109 [2024-11-05 12:51:32.221878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.109 [2024-11-05 12:51:32.221906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.109 [2024-11-05 12:51:32.234430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.109 [2024-11-05 12:51:32.234886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.109 [2024-11-05 12:51:32.234920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.109 [2024-11-05 12:51:32.234938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.109 [2024-11-05 12:51:32.235175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.109 [2024-11-05 12:51:32.235382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.109 [2024-11-05 12:51:32.235403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.109 [2024-11-05 12:51:32.235418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.109 [2024-11-05 12:51:32.235440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:03.109 Malloc0 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.109 [2024-11-05 12:51:32.248232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.109 [2024-11-05 12:51:32.248685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:03.109 [2024-11-05 12:51:32.248713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f2970 with addr=10.0.0.2, port=4420 00:37:03.109 [2024-11-05 12:51:32.248734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f2970 is same with the state(6) to be set 00:37:03.109 [2024-11-05 12:51:32.248957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f2970 (9): Bad file descriptor 00:37:03.109 [2024-11-05 12:51:32.249193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:03.109 [2024-11-05 12:51:32.249228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:03.109 [2024-11-05 12:51:32.249242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:03.109 [2024-11-05 12:51:32.249265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:03.109 [2024-11-05 12:51:32.258594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:03.109 [2024-11-05 12:51:32.261820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.109 12:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 812195 00:37:03.109 3738.83 IOPS, 14.60 MiB/s [2024-11-05T11:51:32.347Z] [2024-11-05 12:51:32.329715] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:37:05.417 4377.86 IOPS, 17.10 MiB/s [2024-11-05T11:51:35.589Z] 4909.75 IOPS, 19.18 MiB/s [2024-11-05T11:51:36.524Z] 5322.11 IOPS, 20.79 MiB/s [2024-11-05T11:51:37.521Z] 5659.60 IOPS, 22.11 MiB/s [2024-11-05T11:51:38.455Z] 5936.00 IOPS, 23.19 MiB/s [2024-11-05T11:51:39.388Z] 6159.00 IOPS, 24.06 MiB/s [2024-11-05T11:51:40.321Z] 6355.77 IOPS, 24.83 MiB/s [2024-11-05T11:51:41.693Z] 6516.43 IOPS, 25.45 MiB/s 00:37:12.455 Latency(us) 00:37:12.455 [2024-11-05T11:51:41.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:12.455 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:12.455 Verification LBA range: start 0x0 length 0x4000 00:37:12.455 Nvme1n1 : 15.01 6655.73 26.00 10056.94 0.00 7635.69 576.47 20486.07 00:37:12.455 [2024-11-05T11:51:41.693Z] =================================================================================================================== 00:37:12.455 [2024-11-05T11:51:41.693Z] Total : 6655.73 26.00 10056.94 0.00 7635.69 576.47 20486.07 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:37:12.455 12:51:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:12.455 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:12.456 rmmod nvme_tcp 00:37:12.456 rmmod nvme_fabrics 00:37:12.456 rmmod nvme_keyring 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 812861 ']' 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 812861 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 812861 ']' 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 812861 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 812861 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 812861' 00:37:12.456 killing process with pid 812861 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # kill 812861 00:37:12.456 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 812861 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.715 12:51:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.247 12:51:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:15.247 00:37:15.247 real 0m22.556s 00:37:15.247 user 0m59.525s 00:37:15.247 sys 0m4.574s 00:37:15.247 12:51:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:15.247 12:51:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:15.247 ************************************ 00:37:15.247 END TEST nvmf_bdevperf 00:37:15.247 ************************************ 00:37:15.247 12:51:43 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:15.247 12:51:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:15.247 12:51:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:15.247 12:51:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:15.247 ************************************ 00:37:15.247 START TEST nvmf_target_disconnect 00:37:15.247 ************************************ 00:37:15.247 12:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:37:15.247 * Looking for test storage... 00:37:15.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:15.247 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:15.247 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:37:15.247 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:15.247 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:15.247 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:15.247 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:37:15.248 12:51:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:15.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.248 --rc genhtml_branch_coverage=1 00:37:15.248 --rc genhtml_function_coverage=1 00:37:15.248 --rc genhtml_legend=1 00:37:15.248 --rc geninfo_all_blocks=1 00:37:15.248 --rc geninfo_unexecuted_blocks=1 
00:37:15.248 00:37:15.248 ' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:15.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.248 --rc genhtml_branch_coverage=1 00:37:15.248 --rc genhtml_function_coverage=1 00:37:15.248 --rc genhtml_legend=1 00:37:15.248 --rc geninfo_all_blocks=1 00:37:15.248 --rc geninfo_unexecuted_blocks=1 00:37:15.248 00:37:15.248 ' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:15.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.248 --rc genhtml_branch_coverage=1 00:37:15.248 --rc genhtml_function_coverage=1 00:37:15.248 --rc genhtml_legend=1 00:37:15.248 --rc geninfo_all_blocks=1 00:37:15.248 --rc geninfo_unexecuted_blocks=1 00:37:15.248 00:37:15.248 ' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:15.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:15.248 --rc genhtml_branch_coverage=1 00:37:15.248 --rc genhtml_function_coverage=1 00:37:15.248 --rc genhtml_legend=1 00:37:15.248 --rc geninfo_all_blocks=1 00:37:15.248 --rc geninfo_unexecuted_blocks=1 00:37:15.248 00:37:15.248 ' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:15.248 12:51:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:15.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:15.248 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.249 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:37:15.249 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.249 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:15.249 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:15.249 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:37:15.249 12:51:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:37:17.148 
12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:17.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:17.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:37:17.148 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:17.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:17.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:17.149 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:17.407 12:51:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:17.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:17.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:37:17.407 00:37:17.407 --- 10.0.0.2 ping statistics --- 00:37:17.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.407 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:17.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:17.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:37:17.407 00:37:17.407 --- 10.0.0.1 ping statistics --- 00:37:17.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.407 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:17.407 12:51:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:17.407 ************************************ 00:37:17.407 START TEST nvmf_target_disconnect_tc1 00:37:17.407 ************************************ 00:37:17.407 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:17.408 [2024-11-05 12:51:46.591302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.408 [2024-11-05 12:51:46.591371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa62610 with 
addr=10.0.0.2, port=4420 00:37:17.408 [2024-11-05 12:51:46.591426] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:17.408 [2024-11-05 12:51:46.591450] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:17.408 [2024-11-05 12:51:46.591464] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:37:17.408 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:17.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:17.408 Initializing NVMe Controllers 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:17.408 00:37:17.408 real 0m0.097s 00:37:17.408 user 0m0.049s 00:37:17.408 sys 0m0.046s 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:17.408 ************************************ 00:37:17.408 END TEST nvmf_target_disconnect_tc1 00:37:17.408 ************************************ 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:17.408 12:51:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:17.408 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:17.666 ************************************ 00:37:17.666 START TEST nvmf_target_disconnect_tc2 00:37:17.666 ************************************ 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=816016 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 816016 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 816016 ']' 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:17.666 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.666 [2024-11-05 12:51:46.704988] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:37:17.666 [2024-11-05 12:51:46.705086] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.666 [2024-11-05 12:51:46.781880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:17.666 [2024-11-05 12:51:46.831703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:17.666 [2024-11-05 12:51:46.831788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:17.666 [2024-11-05 12:51:46.831802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:17.666 [2024-11-05 12:51:46.831813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:17.666 [2024-11-05 12:51:46.831822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:17.666 [2024-11-05 12:51:46.833478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:17.666 [2024-11-05 12:51:46.833546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:17.666 [2024-11-05 12:51:46.833610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:17.666 [2024-11-05 12:51:46.833613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.924 12:51:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.924 Malloc0 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.924 12:51:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.924 [2024-11-05 12:51:47.008637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.924 12:51:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.924 [2024-11-05 12:51:47.036960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=816099 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:17.924 12:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:19.822 12:51:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 816016 00:37:19.822 12:51:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read 
completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Write completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 [2024-11-05 12:51:49.061819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.822 starting I/O failed 00:37:19.822 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 
00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 
00:37:19.823 [2024-11-05 12:51:49.062147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Write completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 starting I/O failed 00:37:19.823 Read completed with error (sct=0, sc=8) 00:37:19.823 
00:37:19.823 Write completed with error (sct=0, sc=8)
00:37:19.823 starting I/O failed
00:37:19.823 Read completed with error (sct=0, sc=8)
00:37:19.823 starting I/O failed
[... the "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for each remaining outstanding I/O on qpairs 4 and 1 ...]
00:37:19.823 [2024-11-05 12:51:49.062455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:37:19.823 [2024-11-05 12:51:49.062736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:19.823 [2024-11-05 12:51:49.062900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:19.823 [2024-11-05 12:51:49.062934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:19.823 qpair failed and we were unable to recover it.
[... the "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats from 12:51:49.063036 through 12:51:49.077349, cycling over tqpairs 0x12f8690, 0x7f47a8000b90, 0x7f47ac000b90, and 0x7f47b4000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:37:20.110 [2024-11-05 12:51:49.077349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.110 [2024-11-05 12:51:49.077375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.110 qpair failed and we were unable to recover it.
00:37:20.110 [2024-11-05 12:51:49.077482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.077509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.077650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.077677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.077753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.077779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.077894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.077922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.078008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 
00:37:20.110 [2024-11-05 12:51:49.078121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.078259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.078377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.078514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.078681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 
00:37:20.110 [2024-11-05 12:51:49.078799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.078922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.078962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.079106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.079147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.079295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.079324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.110 qpair failed and we were unable to recover it. 00:37:20.110 [2024-11-05 12:51:49.079471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.110 [2024-11-05 12:51:49.079498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.079638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.079665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.079749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.079776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.079890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.079918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.080035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.080061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.080208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.080235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.080356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.080384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.080473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.080500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.080643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.080670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.080750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.080778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.080894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.080922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.081021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.081062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.081189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.081218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.081340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.081368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.081485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.081512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.081631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.081660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.081742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.081769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.081869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.081903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.081991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.082018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.082135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.082162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.082271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.082298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.082408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.082435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.082573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.082600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.082705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.082732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.082886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.082914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.083034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.083061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.083178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.083205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.083345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.083372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.083465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.083492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.083613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.083642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.083755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.083782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.083895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.083923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.084011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.084038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.084126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.084153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.084260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.084287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 00:37:20.111 [2024-11-05 12:51:49.084374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.084401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.111 qpair failed and we were unable to recover it. 
00:37:20.111 [2024-11-05 12:51:49.084543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.111 [2024-11-05 12:51:49.084570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.084676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.084704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.084826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.084854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.084944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.084972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.085052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.085080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 
00:37:20.112 [2024-11-05 12:51:49.085191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.085218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.085354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.085402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.085477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.085504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.085588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.085617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.085708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.085748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 
00:37:20.112 [2024-11-05 12:51:49.085870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.085901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.086046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.086075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.086195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.086223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.086306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.086334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.086421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.086449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 
00:37:20.112 [2024-11-05 12:51:49.086558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.086591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.086745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.086785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.086910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.086939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.087089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.087116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.087228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.087256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 
00:37:20.112 [2024-11-05 12:51:49.087400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.087427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.087571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.087600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.087716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.087744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.087888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.087916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.088029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.088056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 
00:37:20.112 [2024-11-05 12:51:49.088138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.088165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.112 [2024-11-05 12:51:49.088253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.112 [2024-11-05 12:51:49.088280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.112 qpair failed and we were unable to recover it. 00:37:20.113 [2024-11-05 12:51:49.088367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.088395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 00:37:20.113 [2024-11-05 12:51:49.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.088546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 00:37:20.113 [2024-11-05 12:51:49.088677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.088705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 
00:37:20.113 [2024-11-05 12:51:49.088790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.088818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 00:37:20.113 [2024-11-05 12:51:49.088925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.088952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 00:37:20.113 [2024-11-05 12:51:49.089061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.089089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 00:37:20.113 [2024-11-05 12:51:49.089168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.089195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 00:37:20.113 [2024-11-05 12:51:49.089303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.113 [2024-11-05 12:51:49.089330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.113 qpair failed and we were unable to recover it. 
00:37:20.113 [2024-11-05 12:51:49.089415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.089444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.089559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.089588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.089676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.089703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.089792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.089820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.089951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.089979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.090067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.090094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.090202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.090229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.090313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.090340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.090452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.090478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.090601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.090641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.090764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.090793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.090943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.090971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.091081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.091109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.091251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.091279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.091416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.091443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.091557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.091585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.091729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.091757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.091873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.091901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.092072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.092230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.092352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.092495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.092637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.092745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.092892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.092979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.093006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.093145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.093172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.113 [2024-11-05 12:51:49.093260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.113 [2024-11-05 12:51:49.093287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.113 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.093399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.093426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.093569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.093596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.093707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.093734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.093875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.093915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.094013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.094043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.094126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.094154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.094376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.094433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.094606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.094633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.094743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.094771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.094857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.094896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.094979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.095091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.095193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.095334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.095453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.095601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.095766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.095880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.095909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.096964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.096991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.097083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.097110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.097245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.097273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.097389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.097417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.097531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.097558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.097689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.097717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.097882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.097922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.098044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.098078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.098214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.098259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.114 [2024-11-05 12:51:49.098348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.114 [2024-11-05 12:51:49.098375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.114 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.098490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.098517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.098632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.098679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.098793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.098820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.098917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.098945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.099055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.099082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.099194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.099221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.099304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.099330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.099451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.099479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.099602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.099629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.099724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.099751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.099842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.099879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.100967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.100997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.101113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.101141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.101255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.101283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.101374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.101402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.101524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.101552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.101700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.101728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.101847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.101893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.101990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.102016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.102155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.102182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.102295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.102322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.102409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.102435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.102551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.102578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.102694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.102721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.102810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.115 [2024-11-05 12:51:49.102839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.115 qpair failed and we were unable to recover it.
00:37:20.115 [2024-11-05 12:51:49.102939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.115 [2024-11-05 12:51:49.102969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.115 qpair failed and we were unable to recover it. 00:37:20.115 [2024-11-05 12:51:49.103082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.115 [2024-11-05 12:51:49.103110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.115 qpair failed and we were unable to recover it. 00:37:20.115 [2024-11-05 12:51:49.103224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.115 [2024-11-05 12:51:49.103252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.103368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.103396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.103537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.103564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.116 [2024-11-05 12:51:49.103676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.103704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.103849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.103882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.104022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.104049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.104170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.104197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.104313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.104339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.116 [2024-11-05 12:51:49.104436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.104462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.104553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.104582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.104727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.104755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.104870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.104899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.105015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.105043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.116 [2024-11-05 12:51:49.105173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.105201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.105325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.105352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.105465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.105492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.105607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.105634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.105748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.105775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.116 [2024-11-05 12:51:49.105973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.106149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.106266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.106385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.106525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.116 [2024-11-05 12:51:49.106639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.106755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.106901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.106930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.107056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.107083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.107223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.107251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.116 [2024-11-05 12:51:49.107392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.107420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.107502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.107530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.107676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.107703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.107839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.107892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.107992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.108021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.116 [2024-11-05 12:51:49.108128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.108155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.108237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.108263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.108401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.108428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.108611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.108638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 00:37:20.116 [2024-11-05 12:51:49.108766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.116 [2024-11-05 12:51:49.108807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.116 qpair failed and we were unable to recover it. 
00:37:20.117 [2024-11-05 12:51:49.108944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.108974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.109089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.109116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.109284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.109335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.109478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.109521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.109612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.109639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 
00:37:20.117 [2024-11-05 12:51:49.109761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.109790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.109958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.109998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.110152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.110181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.110299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.110326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.110466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.110493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 
00:37:20.117 [2024-11-05 12:51:49.110608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.110656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.110780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.110807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.111008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.111036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.111155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.111182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.111333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.111359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 
00:37:20.117 [2024-11-05 12:51:49.111475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.111502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.111610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.111637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.111783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.111809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.111955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.111996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.112123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.112151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 
00:37:20.117 [2024-11-05 12:51:49.112303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.112331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.112445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.112472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.112591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.112618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.112710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.112737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.112877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.112905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 
00:37:20.117 [2024-11-05 12:51:49.113022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.113049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.113163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.113189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.113274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.113301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.113441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.113468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.113607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.113634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 
00:37:20.117 [2024-11-05 12:51:49.113770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.113799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.113962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.117 [2024-11-05 12:51:49.114003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.117 qpair failed and we were unable to recover it. 00:37:20.117 [2024-11-05 12:51:49.114121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.114150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.114271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.114322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.114412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.114440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 
00:37:20.118 [2024-11-05 12:51:49.114558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.114585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.114663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.114690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.114775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.114803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.114941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.114969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.115116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.115145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 
00:37:20.118 [2024-11-05 12:51:49.115286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.115313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.115419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.115446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.115555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.115582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.115696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.115723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.115805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.115832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 
00:37:20.118 [2024-11-05 12:51:49.115921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.115948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.116035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.116066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.116174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.116201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.116318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.116344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.116482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.116509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 
00:37:20.118 [2024-11-05 12:51:49.116620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.116646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.116731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.116761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.116850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.116954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.117099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.117127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.117223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.117250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 
00:37:20.118 [2024-11-05 12:51:49.117343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.117370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.117459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.117486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.117597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.117623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.117734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.117761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.117873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.117901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 
00:37:20.118 [2024-11-05 12:51:49.118028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.118056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.118173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.118200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.118274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.118301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.118446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.118473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.118601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.118630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 
00:37:20.118 [2024-11-05 12:51:49.118787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.118815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.118937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.118 [2024-11-05 12:51:49.118966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.118 qpair failed and we were unable to recover it. 00:37:20.118 [2024-11-05 12:51:49.119175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.119227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.119316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.119344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.119445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.119474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 
00:37:20.119 [2024-11-05 12:51:49.119617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.119644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.119802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.119843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.119980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.120008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.120097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.120130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.120239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.120266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 
00:37:20.119 [2024-11-05 12:51:49.120394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.120434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.120553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.120582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.120685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.120714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.120840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.120874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.121016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.121044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 
00:37:20.119 [2024-11-05 12:51:49.121129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.121157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.121323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.121374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.121482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.121509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.121603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.121631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.121714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.121742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 
00:37:20.119 [2024-11-05 12:51:49.121883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.121912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.122027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.122055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.122184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.122212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.122329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.122357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.122551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.122578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 
00:37:20.119 [2024-11-05 12:51:49.122769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.122797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.122884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.122912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.123023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.123051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.123191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.123219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.123308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.123335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 
00:37:20.119 [2024-11-05 12:51:49.123451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.123479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.123557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.123584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.123694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.123724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.123872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.123913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.124013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.124042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 
00:37:20.119 [2024-11-05 12:51:49.124164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.124191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.124309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.124336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.124412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.124439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.119 [2024-11-05 12:51:49.124524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.119 [2024-11-05 12:51:49.124551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.119 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.124661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.124688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.120 [2024-11-05 12:51:49.124846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.124893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.125023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.125052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.125197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.125225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.125422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.125449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.125564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.125592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.120 [2024-11-05 12:51:49.125698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.125725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.125843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.125877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.125985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.126013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.126135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.126167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.126274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.126302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.120 [2024-11-05 12:51:49.126445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.126472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.126593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.126633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.126767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.126806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.126971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.127000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.127141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.127169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.120 [2024-11-05 12:51:49.127347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.127396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.127571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.127631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.127724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.127752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.127877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.127905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.128048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.128075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.120 [2024-11-05 12:51:49.128211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.128238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.128348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.128375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.128497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.128526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.128645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.128671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.128785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.128812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.120 [2024-11-05 12:51:49.128964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.128991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.129106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.129133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.129240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.129266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.129381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.129407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.129520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.129547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.120 [2024-11-05 12:51:49.129664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.129691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.129777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.129803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.129912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.129939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.130050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.130091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 00:37:20.120 [2024-11-05 12:51:49.130234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.120 [2024-11-05 12:51:49.130273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.120 qpair failed and we were unable to recover it. 
00:37:20.121 [2024-11-05 12:51:49.130369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:20.121 [2024-11-05 12:51:49.130404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 
00:37:20.121 qpair failed and we were unable to recover it. 
[... 113 similar entries elided: the same connect() failed, errno = 111 / sock connection error pair repeats from 12:51:49.130547 through 12:51:49.147165, cycling through tqpair=0x12f8690, 0x7f47a8000b90, 0x7f47ac000b90, and 0x7f47b4000b90, all targeting addr=10.0.0.2, port=4420, each entry ending "qpair failed and we were unable to recover it." ...]
00:37:20.124 [2024-11-05 12:51:49.147336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:37:20.124 [2024-11-05 12:51:49.147371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 
00:37:20.124 qpair failed and we were unable to recover it. 
00:37:20.124 [2024-11-05 12:51:49.147552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.147579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.147669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.147696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.147812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.147840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.147995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.148022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.148159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.148186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 
00:37:20.124 [2024-11-05 12:51:49.148295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.148322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.148409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.148441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.148588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.148642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.148802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.148843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.148979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.149007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 
00:37:20.124 [2024-11-05 12:51:49.149123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.149150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.149291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.149339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.149544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.149590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.149708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.149737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.149824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.149851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 
00:37:20.124 [2024-11-05 12:51:49.149936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.149963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.150052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.150079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.150172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.150199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.150292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.150320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.150399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.150426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 
00:37:20.124 [2024-11-05 12:51:49.150654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.150710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.150791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.150819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.150936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.150965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.151105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.151133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.151248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.151276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 
00:37:20.124 [2024-11-05 12:51:49.151354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.151381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.151525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.151554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.151638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.151665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.151753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.151780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.124 [2024-11-05 12:51:49.151895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.151922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 
00:37:20.124 [2024-11-05 12:51:49.152019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.124 [2024-11-05 12:51:49.152046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.124 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.152132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.152159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.152244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.152271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.152365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.152393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.152553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.152599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 
00:37:20.125 [2024-11-05 12:51:49.152741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.152772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.152890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.152917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.153031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.153057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.153171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.153198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.153284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.153312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 
00:37:20.125 [2024-11-05 12:51:49.153464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.153526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.153679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.153727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.153874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.153902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.154020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.154047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.154131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.154159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 
00:37:20.125 [2024-11-05 12:51:49.154303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.154331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.154524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.154556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.154698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.154725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.154812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.154840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.154988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.155016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 
00:37:20.125 [2024-11-05 12:51:49.155158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.155184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.155274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.155301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.155410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.155437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.155576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.155602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.155716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.155743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 
00:37:20.125 [2024-11-05 12:51:49.155869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.155899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.156043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.156071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.156184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.156211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.156290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.156318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.156435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.156463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 
00:37:20.125 [2024-11-05 12:51:49.156576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.156616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.156717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.156745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.156879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.156906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.157022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.125 [2024-11-05 12:51:49.157049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.125 qpair failed and we were unable to recover it. 00:37:20.125 [2024-11-05 12:51:49.157134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.157160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 
00:37:20.126 [2024-11-05 12:51:49.157244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.157271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.157415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.157441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.157552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.157578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.157703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.157742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.157835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.157873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 
00:37:20.126 [2024-11-05 12:51:49.158067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.158095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.158211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.158239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.158352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.158380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.158528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.158580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.158674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.158702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 
00:37:20.126 [2024-11-05 12:51:49.158846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.158879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.158992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.159019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.159135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.159161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.159237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.159263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.159389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.159418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 
00:37:20.126 [2024-11-05 12:51:49.159531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.159560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.159657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.159697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.159821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.159849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.159984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.160091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 
00:37:20.126 [2024-11-05 12:51:49.160211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.160313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.160430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.160574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.160686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 
00:37:20.126 [2024-11-05 12:51:49.160820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.160847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.160983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.161023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.161142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.161170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.161294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.161320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.161435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.161462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 
00:37:20.126 [2024-11-05 12:51:49.161545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.161572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.161689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.161716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.161866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.161895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.126 [2024-11-05 12:51:49.161981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.126 [2024-11-05 12:51:49.162008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.126 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.162127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.162153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 
00:37:20.127 [2024-11-05 12:51:49.162244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.162272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.162420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.162448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.162573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.162608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.162723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.162750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.162871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.162900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 
00:37:20.127 [2024-11-05 12:51:49.163014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.163041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.163200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.163238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.163409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.163447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.163620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.163647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.163731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.163759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 
00:37:20.127 [2024-11-05 12:51:49.163876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.163904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.163996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.164023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.164100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.164127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.164276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.164308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.164390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.164417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 
00:37:20.127 [2024-11-05 12:51:49.164526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.164554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.164677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.164718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.164850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.164909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.165060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.165088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.165327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.165362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 
00:37:20.127 [2024-11-05 12:51:49.165563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.165590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.165701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.165728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.165840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.165878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.165996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.166023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.166153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.166180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 
00:37:20.127 [2024-11-05 12:51:49.166264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.166291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.166433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.166460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.166580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.166607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.166723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.166752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.166855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.166905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 
00:37:20.127 [2024-11-05 12:51:49.167001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.167029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.127 [2024-11-05 12:51:49.167117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.127 [2024-11-05 12:51:49.167144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.127 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.167259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.167287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.167422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.167449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.167536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.167564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 
00:37:20.128 [2024-11-05 12:51:49.167706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.167733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.167883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.167915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.168027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.168067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.168287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.168335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.168508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.168558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 
00:37:20.128 [2024-11-05 12:51:49.168698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.168731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.168816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.168844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.168974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.169001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.169117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.169144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.169224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.169250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 
00:37:20.128 [2024-11-05 12:51:49.169377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.169427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.169567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.169614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.169770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.169810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.169932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.169961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.170096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.170126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 
00:37:20.128 [2024-11-05 12:51:49.170233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.170260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.170373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.170400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.170541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.170568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.170686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.170712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.170835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.170870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 
00:37:20.128 [2024-11-05 12:51:49.170986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.171015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.171128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.171155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.171294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.171321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.171460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.171487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.171631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.171658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 
00:37:20.128 [2024-11-05 12:51:49.171741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.171768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.171857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.171892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.172001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.172028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.172120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.172148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.172242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.172270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 
00:37:20.128 [2024-11-05 12:51:49.172381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.172408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.172519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.172546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.172677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.172718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.128 qpair failed and we were unable to recover it. 00:37:20.128 [2024-11-05 12:51:49.172839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.128 [2024-11-05 12:51:49.172880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.172980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.173008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 
00:37:20.129 [2024-11-05 12:51:49.173092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.173119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.173229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.173258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.173385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.173415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.173606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.173661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.173779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.173806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 
00:37:20.129 [2024-11-05 12:51:49.173937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.173965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.174082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.174110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.174194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.174221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.174333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.174362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 00:37:20.129 [2024-11-05 12:51:49.174503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.129 [2024-11-05 12:51:49.174550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.129 qpair failed and we were unable to recover it. 
00:37:20.129 [2024-11-05 12:51:49.174690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.174722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.174838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.174878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.175002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.175029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.175144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.175170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.175284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.175311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.175405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.175434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.175547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.175575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.175687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.175715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.175864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.175892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.176035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.176145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.176286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.176429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.176587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.176745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.176898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.176982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.177009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.177122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.177148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.177229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.177257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.129 [2024-11-05 12:51:49.177337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.129 [2024-11-05 12:51:49.177364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.129 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.177472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.177499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.177624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.177664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.177785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.177813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.177963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.177990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.178131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.178182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.178355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.178408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.178543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.178588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.178669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.178697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.178809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.178839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.178965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.178993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.179122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.179149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.179238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.179266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.179348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.179376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.179516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.179544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.179691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.179718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.179805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.179831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.179923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.179950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.180058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.180085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.180172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.180198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.180281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.180307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.180477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.180530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.180675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.180702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.180792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.180819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.180966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.180994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.181090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.181116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.181254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.181281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.181478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.181529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.181689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.181715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.181800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.181830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.181961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.181990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.182079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.182107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.182227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.182255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.130 [2024-11-05 12:51:49.182393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.130 [2024-11-05 12:51:49.182420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.130 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.182603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.182656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.182772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.182801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.182915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.182943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.183056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.183083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.183193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.183220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.183309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.183335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.183448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.183475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.183589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.183616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.183748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.183789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.183917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.183958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.184080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.184109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.184224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.184251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.184360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.184387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.184501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.184528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.184668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.184695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.184791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.184821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.184951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.184981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.185073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.185101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.185184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.185212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.185323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.185351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.185466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.185495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.185600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.185640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.185788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.185817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.185926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.185954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.186069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.186097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.186184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.186211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.186290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.186317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.186410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.186439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.186550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.186589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.186717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.186746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.186830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.186864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.187005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.187055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.187198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.187225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.187320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.187359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.131 [2024-11-05 12:51:49.187606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.131 [2024-11-05 12:51:49.187643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.131 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.187777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.132 [2024-11-05 12:51:49.187803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.132 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.187923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.132 [2024-11-05 12:51:49.187951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.132 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.188068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.132 [2024-11-05 12:51:49.188094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.132 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.188215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.132 [2024-11-05 12:51:49.188242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.132 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.188363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.132 [2024-11-05 12:51:49.188391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.132 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.188501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.132 [2024-11-05 12:51:49.188528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.132 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.188619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.132 [2024-11-05 12:51:49.188651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.132 qpair failed and we were unable to recover it.
00:37:20.132 [2024-11-05 12:51:49.188790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.188817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.188936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.188964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.189060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.189085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.189170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.189195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.189331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.189358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 
00:37:20.132 [2024-11-05 12:51:49.189462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.189489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.189606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.189633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.189760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.189801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.189928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.189958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.190059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.190100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 
00:37:20.132 [2024-11-05 12:51:49.190250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.190279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.190360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.190389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.190541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.190570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.190701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.190729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.190866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.190906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 
00:37:20.132 [2024-11-05 12:51:49.191029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.191058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.191202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.191239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.191521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.191548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.191639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.191668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.132 qpair failed and we were unable to recover it. 00:37:20.132 [2024-11-05 12:51:49.191784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.132 [2024-11-05 12:51:49.191812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-11-05 12:51:49.191908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.191935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.192043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.192069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.192210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.192237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.192398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.192452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.192666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.192712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-11-05 12:51:49.192837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.192875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.192997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.193149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.193264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.193409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-11-05 12:51:49.193527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.193673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.193808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.193958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.193999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.194129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.194169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-11-05 12:51:49.194261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.194290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.194427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.194453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.194568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.194595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.194726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.194767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.194870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.194900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-11-05 12:51:49.194995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.195098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.195205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.195347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.195517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-11-05 12:51:49.195678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.195836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.195969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.195998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.196119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.196148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.196317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.196369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 
00:37:20.133 [2024-11-05 12:51:49.196594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.196652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.196773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.196801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.196932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.133 [2024-11-05 12:51:49.196960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.133 qpair failed and we were unable to recover it. 00:37:20.133 [2024-11-05 12:51:49.197078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.197107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.197252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.197300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-11-05 12:51:49.197436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.197484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.197703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.197729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.197809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.197835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.197941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.197981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.198102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.198130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-11-05 12:51:49.198268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.198295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.198492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.198549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.198691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.198720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.198874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.198903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.199042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.199070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-11-05 12:51:49.199184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.199213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.199377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.199429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.199520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.199548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.199636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.199664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.199783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.199811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-11-05 12:51:49.199961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.199988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.200104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.200132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.200242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.200269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.200379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.200406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.200579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.200630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-11-05 12:51:49.200773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.200801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.200902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.200943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.201070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.201098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.201207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.201234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.201323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.201350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 
00:37:20.134 [2024-11-05 12:51:49.201517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.201582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.201750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.201787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.201951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.201981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.202071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.202100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.134 qpair failed and we were unable to recover it. 00:37:20.134 [2024-11-05 12:51:49.202182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.134 [2024-11-05 12:51:49.202209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 
00:37:20.135 [2024-11-05 12:51:49.202371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.202422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.202641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.202694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.202772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.202799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.202948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.202976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.203095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.203123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 
00:37:20.135 [2024-11-05 12:51:49.203237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.203266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.203405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.203494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.203635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.203662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.203774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.203806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 00:37:20.135 [2024-11-05 12:51:49.203925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.135 [2024-11-05 12:51:49.203953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.135 qpair failed and we were unable to recover it. 
00:37:20.135 [2024-11-05 12:51:49.204071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.204098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.204241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.204269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.204378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.204405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.204518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.204545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.204657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.204685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.204810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.204851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.204998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.205038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.205135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.205163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.205281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.205308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.205390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.205417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.205556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.205582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.205672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.205699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.205797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.205836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.205997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.206026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.206148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.206175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.206295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.206330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.206465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.206499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.206665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.206703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.206842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.206877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.206970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.206997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.207214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.207256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.207410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.207460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.207578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.207606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.207720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.207749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.135 [2024-11-05 12:51:49.207892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.135 [2024-11-05 12:51:49.207921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.135 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.208048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.208088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.208240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.208269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.208362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.208389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.208507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.208534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.208617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.208644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.208791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.208817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.208939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.208966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.209085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.209112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.209214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.209241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.209354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.209381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.209495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.209523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.209638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.209672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.209789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.209818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.209939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.209972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.210119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.210146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.210228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.210255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.210437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.210489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.210628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.210655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.210764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.210791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.210897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.210937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.211046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.211085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.211208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.211258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.211488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.211540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.211706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.211755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.211874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.211902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.211984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.212011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.212210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.212305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.212390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.212417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.212587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.212641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.212727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.212754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.212836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.136 [2024-11-05 12:51:49.212870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.136 qpair failed and we were unable to recover it.
00:37:20.136 [2024-11-05 12:51:49.213011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.213036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.213146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.213171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.213253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.213279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.213421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.213467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.213550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.213577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.213693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.213721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.213869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.213899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.213993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.214021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.214165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.214213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.214383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.214437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.214667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.214718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.214803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.214831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.214955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.214983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.215094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.215122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.215237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.215264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.215383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.215410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.215553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.215581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.215698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.215726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.215834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.215873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.215960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.215987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.216102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.216130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.216248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.216275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.216396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.216436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.216543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.216573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.216662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.216690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.216828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.216855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.216954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.216981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.217062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.217090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.137 [2024-11-05 12:51:49.217170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.137 [2024-11-05 12:51:49.217197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.137 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.217414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.217465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.217647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.217684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.217854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.217890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.218032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.218059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.218173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.218199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.218305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.218332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.218438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.218466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.218568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.218608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.218765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.218806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.218939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.138 [2024-11-05 12:51:49.218968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.138 qpair failed and we were unable to recover it.
00:37:20.138 [2024-11-05 12:51:49.219081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.219108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.219181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.219208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.219325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.219351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.219439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.219467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.219603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.219655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-11-05 12:51:49.219773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.219800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.219935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.219962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.220079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.220106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.220189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.220215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.220353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.220402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-11-05 12:51:49.220584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.220649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.220809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.220835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.220974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.221015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.221194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.221251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.221508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.221575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-11-05 12:51:49.221785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.221824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.221989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.222018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.222135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.222162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.222303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.222353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.222546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.222580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 
00:37:20.138 [2024-11-05 12:51:49.222721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.222784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.223029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.138 [2024-11-05 12:51:49.223069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.138 qpair failed and we were unable to recover it. 00:37:20.138 [2024-11-05 12:51:49.223189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.223217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.223311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.223362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.223550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.223608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-11-05 12:51:49.223795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.223830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.223985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.224013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.224133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.224159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.224300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.224327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.224443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.224470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-11-05 12:51:49.224583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.224630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.224796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.224835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.224962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.224989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.225098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.225126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.225239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.225266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-11-05 12:51:49.225386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.225412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.225706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.225772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.226009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.226043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.226164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.226212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.226481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.226516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-11-05 12:51:49.226689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.226724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.226874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.226902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.226994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.227022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.227136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.227163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.227307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.227356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-11-05 12:51:49.227509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.227546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.227829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.227899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.228033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.228061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.228169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.228196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.228311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.228337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-11-05 12:51:49.228419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.228445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.228573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.228600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.228763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.228797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.228919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.228946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.229062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.229090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 
00:37:20.139 [2024-11-05 12:51:49.229188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.139 [2024-11-05 12:51:49.229228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.139 qpair failed and we were unable to recover it. 00:37:20.139 [2024-11-05 12:51:49.229374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.229419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.229549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.229596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.229684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.229712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.229828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.229854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-11-05 12:51:49.229967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.229994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.230086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.230113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.230223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.230250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.230341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.230369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.230485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.230513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-11-05 12:51:49.230605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.230645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.230768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.230798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.230921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.230949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.231038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.231066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.231185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.231212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-11-05 12:51:49.231362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.231417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.231593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.231657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.231800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.231828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.231952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.231980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.232065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.232091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-11-05 12:51:49.232168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.232194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.232369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.232422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.232645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.232708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.232831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.232867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.232952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.232980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-11-05 12:51:49.233067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.233094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.233347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.233397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.233536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.233594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.233712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.233740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.233864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.233893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 
00:37:20.140 [2024-11-05 12:51:49.233986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.234012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.234180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.234229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.234420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.140 [2024-11-05 12:51:49.234478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.140 qpair failed and we were unable to recover it. 00:37:20.140 [2024-11-05 12:51:49.234660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-11-05 12:51:49.234717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-11-05 12:51:49.234831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-11-05 12:51:49.234856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 
00:37:20.141 [2024-11-05 12:51:49.234974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-11-05 12:51:49.235025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-11-05 12:51:49.235142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-11-05 12:51:49.235192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-11-05 12:51:49.235396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-11-05 12:51:49.235454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-11-05 12:51:49.235558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-11-05 12:51:49.235591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 00:37:20.141 [2024-11-05 12:51:49.235691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.141 [2024-11-05 12:51:49.235718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.141 qpair failed and we were unable to recover it. 
00:37:20.141 [2024-11-05 12:51:49.235830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.235856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.236010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.236036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.236126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.236152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.236275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.236300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.236442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.236468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.236577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.236602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.236759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.236799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.236919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.236949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.237063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.237091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.237177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.237210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.237383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.237436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.237614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.237674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.237792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.237820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.237972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.238004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.238092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.238137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.238342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.238382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.238664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.238731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.238971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.238999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.239087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.239114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.239231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.239259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.239515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.239542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.239796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.239824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.239979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.240006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.240175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.240213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.240427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.240465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.141 [2024-11-05 12:51:49.240585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.141 [2024-11-05 12:51:49.240634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.141 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.240851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.240887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.241005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.241032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.241151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.241178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.241252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.241279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.241410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.241448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.241666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.241705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.241869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.241897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.242013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.242041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.242127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.242155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.242310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.242348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.242477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.242525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.242687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.242725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.242948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.242976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.243088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.243115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.243233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.243260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.243475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.243542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.243823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.243917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.244034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.244061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.244174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.244202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.244383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.244421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.244600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.244628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.244827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.244917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.245058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.245085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.245189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.245221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.245375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.245444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.245728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.245793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.245998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.246026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.246121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.142 [2024-11-05 12:51:49.246148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.142 qpair failed and we were unable to recover it.
00:37:20.142 [2024-11-05 12:51:49.246291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.246318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.246599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.246664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.246912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.246940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.247028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.247055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.247135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.247162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.247272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.247299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.247540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.247607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.247760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.247788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.247896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.247924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.248044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.248072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.248162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.248190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.248334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.248369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.248487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.248553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.248838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.248919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.249032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.249059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.249173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.249201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.249363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.249431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.249690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.249754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.249933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.249960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.250103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.250131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.250268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.250302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.250546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.250611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.250802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.250842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.250976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.251125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.251238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.251346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.251455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.251603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.251763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.251950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.251991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.252189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.252259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.143 [2024-11-05 12:51:49.252518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.143 [2024-11-05 12:51:49.252553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.143 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.252721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.252755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.252919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.252947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.253090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.253116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.253309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.253376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.253555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.253603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.253913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.253940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.254029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.254054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.254191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.144 [2024-11-05 12:51:49.254217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.144 qpair failed and we were unable to recover it.
00:37:20.144 [2024-11-05 12:51:49.254330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.254375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.254507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.254541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.254793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.254881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.255054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.255080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.255234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.255268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 
00:37:20.144 [2024-11-05 12:51:49.255404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.255439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.255613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.255647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.255948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.255976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.256076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.256117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.256283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.256338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 
00:37:20.144 [2024-11-05 12:51:49.256447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.256516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.256660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.256688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.256777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.256805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.256948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.256988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.257135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.257163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 
00:37:20.144 [2024-11-05 12:51:49.257306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.257333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.257530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.257596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.257889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.257949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.258059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.258085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.258271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.258336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 
00:37:20.144 [2024-11-05 12:51:49.258525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.258592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.258829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.258867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.259011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.259037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.259154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.259200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.144 [2024-11-05 12:51:49.259345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.259396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 
00:37:20.144 [2024-11-05 12:51:49.259630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.144 [2024-11-05 12:51:49.259696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.144 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.259929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.259957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.260040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.260067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.260237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.260302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.260538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.260603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 
00:37:20.145 [2024-11-05 12:51:49.260884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.260911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.260993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.261021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.261134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.261161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.261363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.261390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.261665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.261730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 
00:37:20.145 [2024-11-05 12:51:49.261963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.261991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.262126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.262183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.262422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.262487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.262730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.262795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.262991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.263018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 
00:37:20.145 [2024-11-05 12:51:49.263131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.263158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.263240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.263267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.263505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.263569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.263795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.263822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.263936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.263964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 
00:37:20.145 [2024-11-05 12:51:49.264105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.264157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.264376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.264441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.264724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.264789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.265026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.265053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.265148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.265175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 
00:37:20.145 [2024-11-05 12:51:49.265292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.265318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.265492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.265557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.265851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.265934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.266030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.266057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.266166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.266193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 
00:37:20.145 [2024-11-05 12:51:49.266288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.266314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.266428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.266455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.266593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.266658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.266877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.266905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.267020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.267046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 
00:37:20.145 [2024-11-05 12:51:49.267131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.267191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.267367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.267394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.267525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.267552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.145 qpair failed and we were unable to recover it. 00:37:20.145 [2024-11-05 12:51:49.267706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.145 [2024-11-05 12:51:49.267774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.268020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.268087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 
00:37:20.146 [2024-11-05 12:51:49.268336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.268401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.268647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.268713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.268983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.269051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.269340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.269405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.269664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.269726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 
00:37:20.146 [2024-11-05 12:51:49.270016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.270049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.270194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.270226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.270444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.270502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.270670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.270703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.270993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.271026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 
00:37:20.146 [2024-11-05 12:51:49.271204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.271280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.271521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.271582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.271879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.271943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.272074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.272106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.272209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.272239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 
00:37:20.146 [2024-11-05 12:51:49.272372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.272408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.272526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.272560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.272702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.272734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.272885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.272918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 00:37:20.146 [2024-11-05 12:51:49.273068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.146 [2024-11-05 12:51:49.273103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.146 qpair failed and we were unable to recover it. 
00:37:20.146 [2024-11-05 12:51:49.273355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.146 [2024-11-05 12:51:49.273415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.146 qpair failed and we were unable to recover it.
[the same connect()/qpair-failure triplet repeats continuously from 12:51:49.273 through 12:51:49.296, with only the microsecond timestamps varying; the failing tqpair alternates between 0x7f47b4000b90 and 0x12f8690, always with addr=10.0.0.2, port=4420]
00:37:20.149 [2024-11-05 12:51:49.296084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.149 [2024-11-05 12:51:49.296115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.149 qpair failed and we were unable to recover it.
00:37:20.149 [2024-11-05 12:51:49.296265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-11-05 12:51:49.296315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-11-05 12:51:49.296504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-11-05 12:51:49.296571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-11-05 12:51:49.296765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-11-05 12:51:49.296798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-11-05 12:51:49.296964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.149 [2024-11-05 12:51:49.296992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.149 qpair failed and we were unable to recover it. 00:37:20.149 [2024-11-05 12:51:49.297125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.297158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.297278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.297334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.297490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.297522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.297671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.297702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.297835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.297874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.298019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.298047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.298127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.298153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.298297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.298342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.298459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.298486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.298597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.298629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.298781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.298808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.298964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.298991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.299075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.299103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.299189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.299216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.299357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.299388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.299548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.299579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.299698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.299730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.299943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.299971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.300068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.300100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.300214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.300241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.300397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.300427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.300559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.300589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.300726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.300756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.300852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.300890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.301010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.301038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.301151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.301178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.301273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.301300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.301414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.301447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.301656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.301688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.301851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.301915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.302026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.302057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.302144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.302183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.302312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.302340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.302424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.302452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.302595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.302626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.302759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.302790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 
00:37:20.150 [2024-11-05 12:51:49.302948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.150 [2024-11-05 12:51:49.302977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.150 qpair failed and we were unable to recover it. 00:37:20.150 [2024-11-05 12:51:49.303064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.303092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.303239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.303266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.303361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.303388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.303516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.303547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-11-05 12:51:49.303691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.303722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.303849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.303887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.304020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.304047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.304172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.304200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.304370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.304428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-11-05 12:51:49.304580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.304614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.304740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.304771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.304917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.304945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.305032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.305059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.305189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.305227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-11-05 12:51:49.305319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.305348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.305474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.305505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.305631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.305661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.305794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.305825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.305976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.306004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-11-05 12:51:49.306119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.306147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.306299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.306327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.306464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.306493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.306624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.306654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.306754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.306783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-11-05 12:51:49.306904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.306932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.307073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.307100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.307244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.307276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.307397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.307427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.307527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.307557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-11-05 12:51:49.307691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.307721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.307854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.307888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.308014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.308041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.308186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.308226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.308331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.308361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.151 [2024-11-05 12:51:49.308507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.308554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.308714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.308744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.308883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.308911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.308996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.309023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 00:37:20.151 [2024-11-05 12:51:49.309117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.151 [2024-11-05 12:51:49.309144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.151 qpair failed and we were unable to recover it. 
00:37:20.429 [2024-11-05 12:51:49.325924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.429 [2024-11-05 12:51:49.325950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.429 qpair failed and we were unable to recover it.
00:37:20.429 [2024-11-05 12:51:49.326063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.326089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.326247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.326273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.326369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.326396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.326502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.326531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.326618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.326646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 
00:37:20.429 [2024-11-05 12:51:49.326758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.326785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.326903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.326933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.327012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.327039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.327128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.327174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.327335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.327362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 
00:37:20.429 [2024-11-05 12:51:49.327459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.327487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.327641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.327668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.327801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.327828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.327991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1306630 is same with the state(6) to be set 00:37:20.429 [2024-11-05 12:51:49.328188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.328253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.329079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.329111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 
00:37:20.429 [2024-11-05 12:51:49.329216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.329246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.329393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.429 [2024-11-05 12:51:49.329420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.429 qpair failed and we were unable to recover it. 00:37:20.429 [2024-11-05 12:51:49.329516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.329543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.329635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.329661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.329753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.329780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.329919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.329958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.330057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.330085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.330231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.330257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.330338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.330364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.330506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.330534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.330633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.330659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.330772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.330798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.330887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.330914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.330996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.331022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.331105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.331131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.331252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.331278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.331421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.331448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.331566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.331594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.331715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.331742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.331875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.331903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.332016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.332043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.332125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.332152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.332234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.332260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.332372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.332398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.332527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.332575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.332748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.332776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.332866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.332900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.332988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.333115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.333258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.333420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.333543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.333674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.333815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.333936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.333963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.334047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.334075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.334231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.334258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.334388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.334414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.334526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.334552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.334642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.334671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.334754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.334781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.334898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.334926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.335013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.335039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.335152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.335179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.335296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.335327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.335425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.335461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.335618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.335645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.335761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.335788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.335909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.335938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.336063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.336089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.336211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.336242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.336324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.336353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.336458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.336484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.336599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.336629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.336737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.336764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.337885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.337917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.338037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.338152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.338284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.338449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.338587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.338700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.338811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.338926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.338953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.339041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.339068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.339207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.339235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.339357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.339389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.339506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.339531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.339678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.339704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.430 [2024-11-05 12:51:49.339794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.339819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 
00:37:20.430 [2024-11-05 12:51:49.339957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.430 [2024-11-05 12:51:49.339996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.430 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.340097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.340129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.340223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.340249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.340372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.340398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.340486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.340512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.340639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.340677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.340801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.340830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.340942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.340971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.341086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.341112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.341207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.341234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.341311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.341337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.341448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.341473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.341570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.341610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.341710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.341739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.341825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.341871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.342000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.342027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.342113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.342139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.342261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.342288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.342385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.342413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.342530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.342557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.342697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.342724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.342838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.342870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.342989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.343015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.343105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.343132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.343282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.343333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.343441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.343474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.343620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.343660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.343768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.343803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.343913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.343941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.344021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.344157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.344280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.344466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.344610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.344721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.344835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.344965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.344992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.345078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.345189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.345302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.345449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.345553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.345680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.345823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.345953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.345979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.346058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.346165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.346313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.346432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.346545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.346657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.346768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.346904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.346936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.347051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.347078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.347187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.347215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.347345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.347372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.347477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.347504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.347594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.347622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.347701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.347728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.347839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.347883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 
00:37:20.431 [2024-11-05 12:51:49.348001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.348028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.348137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.348163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.348285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.348311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.348395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.431 [2024-11-05 12:51:49.348422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.431 qpair failed and we were unable to recover it. 00:37:20.431 [2024-11-05 12:51:49.348506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.348533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 
00:37:20.432 [2024-11-05 12:51:49.348649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.348674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.348790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.348817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.349764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.349792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.349981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.350014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.350136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.350176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 
00:37:20.432 [2024-11-05 12:51:49.350322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.350348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.350469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.350495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.350586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.350614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.350723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.350749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.350843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.350888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 
00:37:20.432 [2024-11-05 12:51:49.351016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.351044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.351138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.351165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.351302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.351328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.351479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.351505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.351599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.351624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 
00:37:20.432 [2024-11-05 12:51:49.351739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.351765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.351907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.351934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.352722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.352767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.352939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.352967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.353091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.353118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 
00:37:20.432 [2024-11-05 12:51:49.353235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.353261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.353363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.353389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.353507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.353534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.353683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.353709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.353824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.353851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 
00:37:20.432 [2024-11-05 12:51:49.353977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.354004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.354087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.354114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.354207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.354232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.354309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.354335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 00:37:20.432 [2024-11-05 12:51:49.354449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.432 [2024-11-05 12:51:49.354476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.432 qpair failed and we were unable to recover it. 
00:37:20.432 [2024-11-05 12:51:49.354576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.354603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.354721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.354749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.354831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.354857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.354957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.354984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.355063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.355089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.355247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.355273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.355388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.355415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.355563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.355589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.355670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.355695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.355805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.355832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.355930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.355958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.356075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.356102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.356230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.356257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.356382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.356413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.356506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.356531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.356629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.356657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.356749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.356776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.356901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.356928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.357049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.357191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.357309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.357421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.357589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.357756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.357913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.357999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.358115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.358271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.358378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.358491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.358595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.358710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.358882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.358908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.359029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.359138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.359257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.359401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.359545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.359675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.359867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.359979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.432 [2024-11-05 12:51:49.360006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.432 qpair failed and we were unable to recover it.
00:37:20.432 [2024-11-05 12:51:49.360147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.360185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.360279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.360308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.360397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.360424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.360526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.360553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.360649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.360675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.360792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.360829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.360931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.360958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.361069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.361241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.361358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.361498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.361600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.361750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.361877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.361987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.362123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.362265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.362403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.362510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.362699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.362811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.362931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.362957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.363958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.363985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.364100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.364126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.364234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.364261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.364374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.364403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.364489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.364515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.364637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.364664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.364780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.364806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.364912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.364939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.365915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.365941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.366051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.366206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.366349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.366485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.366618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.366722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.366869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.366989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.367020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.367113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.433 [2024-11-05 12:51:49.367139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.433 qpair failed and we were unable to recover it.
00:37:20.433 [2024-11-05 12:51:49.367225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.367252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.367345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.367373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.367456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.367481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.367567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.367592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.367676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.367702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 
00:37:20.433 [2024-11-05 12:51:49.367813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.367838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.367941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.367966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.368079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.368106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.368191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.368217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.368306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.368333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 
00:37:20.433 [2024-11-05 12:51:49.368423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.368450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.368540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.368567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.368678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.368703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.368849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.368926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.369017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.369043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 
00:37:20.433 [2024-11-05 12:51:49.369127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.369153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.369276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.369301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.369415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.369440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.369525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.369551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.433 qpair failed and we were unable to recover it. 00:37:20.433 [2024-11-05 12:51:49.369659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.433 [2024-11-05 12:51:49.369685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.369796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.369821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.369915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.369941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.370053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.370191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.370310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.370464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.370603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.370707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.370811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.370936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.370963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.371920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.371953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.372052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.372079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.372174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.372200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.372940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.372971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.373068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.373095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.373176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.373200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.373332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.373359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.373471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.373497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.373592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.373625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.373710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.373737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.373849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.373892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.373978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.374002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.374088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.374115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.374257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.374282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.374396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.374422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.374566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.374592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.374712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.374738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.374882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.374922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.375063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.375090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.375179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.375206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.375321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.375347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.375441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.375467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.375624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.375651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.375736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.375762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.375874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.375901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.375991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.376111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.376239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.376391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.376536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.376689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.376802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.376946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.376972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.377113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.377259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.377367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.377477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.377620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.377733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.377839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.377965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.377990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.378087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.378112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.378234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.378260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.378353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.378379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.378495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.378521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.378618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.378658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.378785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.378814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.378918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.378947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.379087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.379115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.379217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.379244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.379405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.379455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.379575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.379601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.379720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.379748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.379836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.379872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 00:37:20.434 [2024-11-05 12:51:49.379975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.434 [2024-11-05 12:51:49.380002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.434 qpair failed and we were unable to recover it. 
00:37:20.434 [2024-11-05 12:51:49.380113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.434 [2024-11-05 12:51:49.380153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.434 qpair failed and we were unable to recover it.
00:37:20.434 [2024-11-05 12:51:49.380281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.434 [2024-11-05 12:51:49.380309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.434 qpair failed and we were unable to recover it.
00:37:20.434 [2024-11-05 12:51:49.380384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.380411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.380493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.380519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.380605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.380632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.380759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.380784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.380881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.380909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.381889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.381917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.382940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.382966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.383108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.383134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.383271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.383297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.383379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.383406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.383502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.383527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.383640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.383668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.383764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.383791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.383889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.383917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.384900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.384984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.385956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.385983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.386073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.386104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.386229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.386256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.386335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.386362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.386512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.386539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.386655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.386682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.386788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.386815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.386942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.386969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.387952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.387979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.388096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.388241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.388351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.388485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.388654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.388768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.388892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.388989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.389015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.389103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.389130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.389273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.389299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.389411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.389439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.389528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.389555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.389639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.389666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.435 [2024-11-05 12:51:49.389756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.435 [2024-11-05 12:51:49.389783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.435 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.389892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.389919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.390919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.390946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.391923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.391950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.392913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.392940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.393941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.393967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.394052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.394078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.394803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.394833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.394980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.395007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.395093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.395120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.395211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.395245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.395365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.436 [2024-11-05 12:51:49.395391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.436 qpair failed and we were unable to recover it.
00:37:20.436 [2024-11-05 12:51:49.395511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.395537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.395655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.395681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.395764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.395790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.395879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.395914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.395995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 
00:37:20.436 [2024-11-05 12:51:49.396106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.396215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.396346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.396457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.396610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 
00:37:20.436 [2024-11-05 12:51:49.396739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.396855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.396890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.396989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.397017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.397120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.397173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.397298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.397327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 
00:37:20.436 [2024-11-05 12:51:49.397447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.397474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.397597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.397623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.397725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.397764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.397879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.397910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.398006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 
00:37:20.436 [2024-11-05 12:51:49.398118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.398224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.398343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.398484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.398600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 
00:37:20.436 [2024-11-05 12:51:49.398707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.398820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.398973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.436 [2024-11-05 12:51:49.398999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.436 qpair failed and we were unable to recover it. 00:37:20.436 [2024-11-05 12:51:49.399080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.399106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.399250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.399277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.399367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.399393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.399478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.399504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.399581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.399607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.399735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.399762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.399879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.399906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.399996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.400023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.400134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.400170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.400309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.400334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.400451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.400482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.400567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.400593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.400740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.400768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.400851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.400889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.401001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.401027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.401106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.401132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.401268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.401295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.401405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.401433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.401552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.401598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.401733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.401759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.402525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.402555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.402762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.402790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.402900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.402928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.403021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.403139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.403286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.403408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.403545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.403675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.403798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.403924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.403950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.404042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.404152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.404266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.404373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.404512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.404661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.404827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.404970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.404995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.405097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.405122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.405228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.405253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.405336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.405367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.405459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.405485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.405574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.405599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.405686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.405711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.405819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.405845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.405961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.406099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.406244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.406362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.406469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.406607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.406755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.406913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.406940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.407025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.407052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.407142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.407169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.407273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.407299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.408065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.408096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.408245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.408289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.408380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.408414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.408538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.408564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.408677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.408703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.408787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.408814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.408941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.408968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.409079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.409105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.409211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.409254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.409407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.409435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.409558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.409598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.409695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.409723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.409833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.409869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.409960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.409988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.410106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.410133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 
00:37:20.437 [2024-11-05 12:51:49.410219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.437 [2024-11-05 12:51:49.410247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.437 qpair failed and we were unable to recover it. 00:37:20.437 [2024-11-05 12:51:49.410350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.410377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.410496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.410532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.410643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.410669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.410762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.410789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.410898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.410926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.411498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.411901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.411995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.412109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.412253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.412374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.412486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.412631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.412758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.412901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.412930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.413050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.413078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.413170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.413195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.413277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.413303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.413419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.413448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.413566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.413595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.413716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.413741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.413830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.413854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.413977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.414145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.414286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.414426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.414547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.414720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.414836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.414956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.414982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.415079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.415189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.415304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.415426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.415567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.415730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.415841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.415970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.415997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.416103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.416133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.416270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.416297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.416423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.416456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.416580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.416607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.416742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.416769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.416853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.416890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.416986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.417128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.417237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.417405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.417550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.417689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.417801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.417951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.417980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.418075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.418102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.418246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.418274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.418429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.418456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.418580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.418607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.418718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.418745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.418852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.418886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.418970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.418999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.419099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.419135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.419254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.419283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.419399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.419425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.419542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.419569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 
00:37:20.438 [2024-11-05 12:51:49.419685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.419711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.419799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.419825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.419923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.438 [2024-11-05 12:51:49.419963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.438 qpair failed and we were unable to recover it. 00:37:20.438 [2024-11-05 12:51:49.420063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.439 [2024-11-05 12:51:49.420091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.439 qpair failed and we were unable to recover it. 00:37:20.439 [2024-11-05 12:51:49.420219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.439 [2024-11-05 12:51:49.420248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.439 qpair failed and we were unable to recover it. 
00:37:20.439 [2024-11-05 12:51:49.420341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.439 [2024-11-05 12:51:49.420367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.439 qpair failed and we were unable to recover it. 00:37:20.439 [2024-11-05 12:51:49.420452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.439 [2024-11-05 12:51:49.420480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.439 qpair failed and we were unable to recover it. 00:37:20.439 [2024-11-05 12:51:49.420590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.439 [2024-11-05 12:51:49.420617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.439 qpair failed and we were unable to recover it. 00:37:20.439 [2024-11-05 12:51:49.420765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.439 [2024-11-05 12:51:49.420792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.439 qpair failed and we were unable to recover it. 00:37:20.439 [2024-11-05 12:51:49.420887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.439 [2024-11-05 12:51:49.420914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.439 qpair failed and we were unable to recover it. 
00:37:20.439 [2024-11-05 12:51:49.421037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.421065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.421153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.421178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.421299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.421326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.421407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.421434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.421531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.421559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.421680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.421719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.421855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.421900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.421996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.422025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.422152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.422179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.422293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.422320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.422484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.422517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.422654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.422681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.422803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.422829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.422932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.422959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.423093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.423196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.423321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.423430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.423565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.423702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.423844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.423981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.424099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.424255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.424382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.424541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.424682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.424792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.424928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.424955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.425049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.425075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.425218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.425244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.425384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.425410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.425529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.425557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.425670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.425705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.425796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.425828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.425931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.425958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.426971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.426998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.427079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.427105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.427236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.427261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.427373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.427399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.427512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.427539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.427668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.427707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.427852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.427891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.427985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.428913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.428993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.429019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.429132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.429159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.439 qpair failed and we were unable to recover it.
00:37:20.439 [2024-11-05 12:51:49.429235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.439 [2024-11-05 12:51:49.429261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.429384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.429414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.429513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.429541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.429689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.429718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.429833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.429866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.429957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.429983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.430073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.430100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.430210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.430259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.430391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.430435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.430545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.430572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.430701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.430740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.430873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.430901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.431919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.431946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.432037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.432063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.432176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.432202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.432322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.432348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.432489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.432515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.432598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.432625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.432725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.432754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.432883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.432911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.433958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.440 [2024-11-05 12:51:49.433987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.440 qpair failed and we were unable to recover it.
00:37:20.440 [2024-11-05 12:51:49.434080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.434107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.434249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.434275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.434363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.434390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.434495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.434522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.434671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.434697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 
00:37:20.440 [2024-11-05 12:51:49.434792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.434818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.434967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.434999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.435086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.435112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.435195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.435220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.435353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.435382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 
00:37:20.440 [2024-11-05 12:51:49.435526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.435571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.435683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.435709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.435836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.435878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.435996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.436115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 
00:37:20.440 [2024-11-05 12:51:49.436290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.436426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.436573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.436691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.436801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 
00:37:20.440 [2024-11-05 12:51:49.436956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.436996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.437089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.437207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.437345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.437482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 
00:37:20.440 [2024-11-05 12:51:49.437590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.437729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.437856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.437973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.437999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.438109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.438134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 
00:37:20.440 [2024-11-05 12:51:49.438232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.438257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.438335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.438361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.438454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.438483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.438639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.438684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.438809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.438840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 
00:37:20.440 [2024-11-05 12:51:49.438984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.439012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.439095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.439122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.439209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.439235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.439382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.440 [2024-11-05 12:51:49.439409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.440 qpair failed and we were unable to recover it. 00:37:20.440 [2024-11-05 12:51:49.439540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.439585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.439684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.439724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.439879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.439909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.440023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.440051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.440228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.440258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.440408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.440440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.440619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.440665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.440767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.440795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.440941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.440970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.441057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.441084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.441216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.441246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.441352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.441379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.441458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.441484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.441598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.441629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.441730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.441770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.441878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.441908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.442027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.442054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.442175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.442201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.442319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.442346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.442461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.442489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.442604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.442631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.442727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.442755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.442844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.442884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.442978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.443099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.443216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.443324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.443432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.443605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.443765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.443895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.443923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.444019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.444046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.444134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.444166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.444288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.444314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.444397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.444424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.444552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.444579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.444670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.444697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.444820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.444869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.444994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.445105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.445215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.445339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.445512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.445681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.445820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.445946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.445972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.446061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.446090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.446185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.446213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.446363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.446389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.446471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.446498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.446647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.446674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.446759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.446786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.446907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.446934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.447051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.447199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.447338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.447483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.447603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.447708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.447814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.447965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.447994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.448077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.448109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.448224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.448250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.448391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.448418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.448528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.448554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.448638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.448664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 
00:37:20.441 [2024-11-05 12:51:49.448748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.441 [2024-11-05 12:51:49.448775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.441 qpair failed and we were unable to recover it. 00:37:20.441 [2024-11-05 12:51:49.448901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.448928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.449008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.449126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.449241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.449376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.449484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.449657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.449788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.449957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.449985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.450078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.450105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.450190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.450216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.450327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.450353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.450467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.450495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.450646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.450673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.450802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.450841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.450970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.450997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.451105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.451130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.451245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.451271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.451401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.451447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.451535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.451561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.451655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.451680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.451840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.451890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.451977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.452095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.452229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.452366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.452497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.452659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.452819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.452945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.452972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.453057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.453083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.453174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.453200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.453290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.453315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.453427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.453454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.453580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.453629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.453741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.453767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.453886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.453912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.454029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.454055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.454179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.454205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.454318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.454344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.454455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.454481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.454607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.454646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.454766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.454793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.454905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.454944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.455038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.455065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.455215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.455241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.455383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.455410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.455527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.455554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.455647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.455676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.455804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.455842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.455967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.455995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.456082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.456109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.456200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.456226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.456360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.456404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.456507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.456538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.456664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.456691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.456804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.456833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.456957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.456984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.457096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.457122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.457241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.457267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.457406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.457432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.457524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.457553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.457715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.457762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.457891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.457919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.458001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.458104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.458266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.458439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.458564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 
00:37:20.442 [2024-11-05 12:51:49.458670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.458805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.458962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.458991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.442 [2024-11-05 12:51:49.459087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.442 [2024-11-05 12:51:49.459114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.442 qpair failed and we were unable to recover it. 00:37:20.443 [2024-11-05 12:51:49.459249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.443 [2024-11-05 12:51:49.459293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.443 qpair failed and we were unable to recover it. 
00:37:20.443 [2024-11-05 12:51:49.459372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.443 [2024-11-05 12:51:49.459398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.443 qpair failed and we were unable to recover it.
[... identical three-line pattern repeated through 12:51:49.474467 (~115 occurrences, elided): connect() to 10.0.0.2:4420 failed with errno = 111 (ECONNREFUSED) for tqpair values 0x12f8690, 0x7f47a8000b90, 0x7f47ac000b90, and 0x7f47b4000b90; every qpair failed and could not be recovered ...]
00:37:20.444 [2024-11-05 12:51:49.474555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.474582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.474705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.474735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.474888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.474915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.475036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.475064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.475175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.475202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 
00:37:20.444 [2024-11-05 12:51:49.475339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.475371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.475529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.475560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.475715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.475741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.475886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.475912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.476053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.476079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 
00:37:20.444 [2024-11-05 12:51:49.476179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.476204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.476303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.476333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.476428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.476458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.476588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.476617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.476710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.476753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 
00:37:20.444 [2024-11-05 12:51:49.476871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.476909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.477021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.477046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.477129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.477154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.477287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.477313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.477453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.477482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 
00:37:20.444 [2024-11-05 12:51:49.477567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.477594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.477675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.477702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.477809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.477836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.477975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.478015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 00:37:20.444 [2024-11-05 12:51:49.478112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.444 [2024-11-05 12:51:49.478138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.444 qpair failed and we were unable to recover it. 
00:37:20.444 [2024-11-05 12:51:49.478248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.478294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.478403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.478451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.478540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.478567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.478689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.478715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.478827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.478855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.478992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.479030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.479157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.479185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.479357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.479403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.479538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.479589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.479711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.479738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.479853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.479885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.479998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.480025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.480139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.480164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.480266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.480297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.480398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.480425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.480584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.480626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.480755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.480781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.480869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.480895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.480992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.481017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.481129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.481155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.481298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.481323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.481412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.481439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.481611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.481658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.481786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.481825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.481969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.481999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.482086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.482113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.482277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.482322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.482452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.482485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.482675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.482722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.482891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.482930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.483029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.483058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.483141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.483168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.483271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.483318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.483485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.483530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.483672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.483698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.483801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.483840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.483945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.483972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.484091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.484119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.484196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.484222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.484331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.484361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.484515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.484563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.484694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.484720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.484835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.484867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.484947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.484973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.485066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.485091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.485172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.485198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.485340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.485366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.485494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.485534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.485629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.485667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 00:37:20.445 [2024-11-05 12:51:49.485787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.445 [2024-11-05 12:51:49.485815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.445 qpair failed and we were unable to recover it. 
00:37:20.445 [2024-11-05 12:51:49.485972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.485998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.486079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.486106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.486216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.486242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.486382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.486407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.486580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.486626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.486757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.486783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.486899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.486925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.487013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.487039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.487125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.487151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.487290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.487316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.487454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.487483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.487645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.487674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.487799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.487827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.487985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.488023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.488130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.488170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.488296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.488324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.488457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.488505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.488582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.488609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.488757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.488785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.488901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.488927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.489016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.489042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.489127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.445 [2024-11-05 12:51:49.489153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.445 qpair failed and we were unable to recover it.
00:37:20.445 [2024-11-05 12:51:49.489268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.489311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.489453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.489500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.489631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.489661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.489811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.489851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.489955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.489982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.490093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.490120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.490257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.490304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.490385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.490412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.490515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.490548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.490692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.490718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.490829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.490855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.490978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.491095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.491287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.491425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.491526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.491659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.491806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.491927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.491956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.492070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.492211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.492324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.492455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.492603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.492724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.492895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.492984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.493012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.493126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.493152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.493258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.493293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.493439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.493467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.493555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.493582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.493728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.493755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.493836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.493869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.493979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.494094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.494217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.494353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.494485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.494653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.494764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.494874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.494903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.495017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.495044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.495160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.495187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.495355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.495398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.495562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.495592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.495686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.495728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.495840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.495876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.495991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.496017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.496126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.496151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.496263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.496289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.496384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.496413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.496528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.496557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.496644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.496685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.496837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.496883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.496979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.497127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.497266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.497383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.497497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.497636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.497750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.497939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.497979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.498103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.498130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.498235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.446 [2024-11-05 12:51:49.498261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.446 qpair failed and we were unable to recover it.
00:37:20.446 [2024-11-05 12:51:49.498340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.498366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.498468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.498494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.498567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.498593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.498675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.498702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.498835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.498865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.498955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.498981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.499136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.499163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.499275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.499301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.499439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.499465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.499578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.499604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.499740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.499770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.499910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.499936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.500043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.500068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.500185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.500211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.500299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.500326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.500419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.500445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.500588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.500622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.500758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.500785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.500896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.500922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.501008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.501039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.501142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.501172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.501314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.501360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.501476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.501502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.501651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.501677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.501789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.501815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.501934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.501960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.502046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.502073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.502217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.502242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.502322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.447 [2024-11-05 12:51:49.502348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.447 qpair failed and we were unable to recover it.
00:37:20.447 [2024-11-05 12:51:49.502429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.502456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.502570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.502596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.502704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.502730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.502840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.502877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.502982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.503021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.503122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.503152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.503323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.503370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.503505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.503555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.503679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.503707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.503832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.503884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.503977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.504005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.504117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.504144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.504274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.504320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.504423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.504455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.504565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.504593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.504748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.504774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.504875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.504914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.505008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.505037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.505154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.505181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.505261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.505288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.505403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.505430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.505560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.505607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.505700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.505728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.505857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.505903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.506010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.506049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.506171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.506199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.506294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.506320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.506436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.506462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.506588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.506636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.506764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.506804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.506934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.506969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.507091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.507117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.507255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.507303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.507394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.507420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.507521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.507556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.507683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.507708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.507805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.507845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.507958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.507986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.508104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.508131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.508268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.508294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 
00:37:20.447 [2024-11-05 12:51:49.508430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.508455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.508564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.508590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.508722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.508763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.447 [2024-11-05 12:51:49.508889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.447 [2024-11-05 12:51:49.508928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.447 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.509039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.509182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.509327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.509433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.509576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.509700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.509843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.509968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.509994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.510080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.510106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.510221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.510247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.510396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.510444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.510539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.510583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.510747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.510787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.510884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.510923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.511039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.511172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.511313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.511447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.511560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.511698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.511835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.511953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.511980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.512114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.512140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.512259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.512285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.512410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.512438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.512518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.512545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.512631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.512658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.512779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.512805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.512945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.512972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.513111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.513137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.513271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.513316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.513485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.513533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.513619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.513645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.513724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.513749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.513868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.513894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.514004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.514030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.514122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.514148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.514226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.514252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.514337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.514362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.514441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.514466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 00:37:20.448 [2024-11-05 12:51:49.514545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.448 [2024-11-05 12:51:49.514575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.448 qpair failed and we were unable to recover it. 
00:37:20.448 [2024-11-05 12:51:49.514660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.514685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.514832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.514869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.514956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.514982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.515093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.515255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.515366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.515501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.515612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.515736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.515872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.515987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.516014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.516099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.516126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.516243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.516270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.516421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.516449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.516560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.516587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.516718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.516756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.516920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.516960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.517055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.517085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.517212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.517257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.517470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.517527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.517634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.517660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.517749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.517776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.517890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.517916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.518004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.518030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.518119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.518145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.518282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.518326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.518442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.448 [2024-11-05 12:51:49.518468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.448 qpair failed and we were unable to recover it.
00:37:20.448 [2024-11-05 12:51:49.518556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.518582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.518685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.518725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.518873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.518901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.518994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.519130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.519274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.519388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.519559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.519712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.519849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.519968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.519994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.520076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.520102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.520190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.520216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.520320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.520351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.520509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.520539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.520694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.520743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.520866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.520893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.521007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.521034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.521146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.521173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.521318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.521363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.521502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.521528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.521612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.521639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.521755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.521782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.521899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.521926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.522017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.522042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.522182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.522229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.522374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.522421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.522541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.522569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.522693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.522720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.522850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.522899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.522997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.523025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.523163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.523190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.523300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.523327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.523440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.523467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.523586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.523614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.523717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.523756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.523898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.523926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.524041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.524069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.524169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.524218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.524335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.524381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.524508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.524539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.524678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.524704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.524813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.524839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.524969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.524996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.525970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.525996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.526085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.526112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.526260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.526286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.526398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.526424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.526535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.526583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.526723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.526749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.526833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.526868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.526951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.526977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.527118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.527144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.527234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.527261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.527344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.527370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.527486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.527513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.527597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.527624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.527742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.527769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.527909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.527936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.528062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.528102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.528223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.449 [2024-11-05 12:51:49.528251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.449 qpair failed and we were unable to recover it.
00:37:20.449 [2024-11-05 12:51:49.528371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-11-05 12:51:49.528398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-11-05 12:51:49.528483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-11-05 12:51:49.528511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-11-05 12:51:49.528618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-11-05 12:51:49.528644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-11-05 12:51:49.528725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.449 [2024-11-05 12:51:49.528752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.449 qpair failed and we were unable to recover it. 00:37:20.449 [2024-11-05 12:51:49.528873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.528901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.528979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.529102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.529209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.529370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.529537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.529677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.529787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.529915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.529943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.530040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.530066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.530176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.530201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.530313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.530338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.530417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.530445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.530540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.530579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.530705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.530732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.530842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.530886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.530999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.531025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.531153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.531199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.531325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.531370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.531483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.531508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.531589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.531615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.531738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.531763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.531883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.531911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.531998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.532137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.532296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.532409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.532574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.532709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.532820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.532942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.532968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.533048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.533073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.533187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.533213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.533323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.533348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.533458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.533487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.533639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.533678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.533834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.533879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.533974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.534002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.534094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.534120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.534261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.534287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.534372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.534418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.534552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.534601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.534731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.534761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.534916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.534944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.535057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.535084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.535256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.535305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.535435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.535483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.535656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.535705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.535795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.535820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.535911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.535937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.536025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.536050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.536229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.536259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.536412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.536460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.536632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.536682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.536827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.536867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.536961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.536990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.537072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.537098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.537228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.537273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.537449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.537499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.537585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.537612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.537700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.537728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.537831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.537882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.538033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.538060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.538188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.538232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.538356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.538403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.538514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.538540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.538673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.538722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.538854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.538904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.539031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.539059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.539190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.539221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.539371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.539417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 
00:37:20.450 [2024-11-05 12:51:49.539530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.539576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.539666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.539693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.450 qpair failed and we were unable to recover it. 00:37:20.450 [2024-11-05 12:51:49.539800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.450 [2024-11-05 12:51:49.539840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-11-05 12:51:49.539956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-11-05 12:51:49.539995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-11-05 12:51:49.540095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-11-05 12:51:49.540122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 
00:37:20.451 [2024-11-05 12:51:49.540265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-11-05 12:51:49.540311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-11-05 12:51:49.540453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-11-05 12:51:49.540502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-11-05 12:51:49.540627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-11-05 12:51:49.540656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-11-05 12:51:49.540783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-11-05 12:51:49.540809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 00:37:20.451 [2024-11-05 12:51:49.540937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.451 [2024-11-05 12:51:49.540968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.451 qpair failed and we were unable to recover it. 
00:37:20.451 [2024-11-05 12:51:49.541064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.541092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.541208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.541235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.541320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.541347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.541482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.541520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.541678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.541705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.541825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.541853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.541954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.541980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.542093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.542119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.542191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.542217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.542353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.542377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.542513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.542537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.542682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.542719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.542878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.542906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.543034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.543071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.543172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.543198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.543349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.543391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.543587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.543630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.543741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.543765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.543885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.543910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.543998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.544023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.544133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.544163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.544410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.544435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.544583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.544628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.544741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.544766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.544876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.544914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.545010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.545038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.545126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.545151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.545290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.545333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.545450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.545475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.545610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.545643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.545756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.545782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.545889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.545927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.546077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.546195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.546383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.546523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.546649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.546766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.546887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.546996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.547135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.547272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.547417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.547554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.547700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.547838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.547966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.547992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.548104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.548136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.548230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.548257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.548366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.548391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.548529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.548567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.548721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.548749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.548885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.548913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.549057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.549083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.549224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.549249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.549333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.549358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.549479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.549504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.549584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.451 [2024-11-05 12:51:49.549627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.451 qpair failed and we were unable to recover it.
00:37:20.451 [2024-11-05 12:51:49.549739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.549764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.549922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.549950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.550070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.550097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.550245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.550270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.550355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.550380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.550494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.550521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.550634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.550660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.550774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.550799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.550883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.550909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.551943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.551969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.552059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.552085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.552199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.552225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.552334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.552360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.552501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.552526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.552643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.552669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.552792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.552818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.552942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.552968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.553055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.553081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.553193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.553218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.553351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.553377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.553500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.553526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.553639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.553667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.553792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.553846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.553963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.554001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.554146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.554173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.554260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.554285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.554390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.554460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.554570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.554595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.554721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.554759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.554878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.554916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.555017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.555056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.555197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.555223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.555341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.555369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.555466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.555491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.555614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.555649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.555771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.555801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.555924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.555951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.556040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.556065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.556148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.556193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.556326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.556355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.556476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.556516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.556642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.556671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.556876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.556933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.557029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.557056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.557186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.557213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.557300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.557325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.557495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.557541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.557618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.557642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.557767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.557794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.557886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:20.452 [2024-11-05 12:51:49.557914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:20.452 qpair failed and we were unable to recover it.
00:37:20.452 [2024-11-05 12:51:49.558001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.558148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.558262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.558408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.558552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 
00:37:20.452 [2024-11-05 12:51:49.558695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.558812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.558929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.558955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.559039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.559063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.559177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.559202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 
00:37:20.452 [2024-11-05 12:51:49.559296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.559350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.559483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.559511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.559627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.559659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.559749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.559777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.559926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.559954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 
00:37:20.452 [2024-11-05 12:51:49.560072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.452 [2024-11-05 12:51:49.560098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.452 qpair failed and we were unable to recover it. 00:37:20.452 [2024-11-05 12:51:49.560211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.560240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.560353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.560378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.560491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.560517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.560597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.560622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 
00:37:20.453 [2024-11-05 12:51:49.560742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.560769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.560871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.560898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.561013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.561039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.561153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.561179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.561319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.561355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 
00:37:20.453 [2024-11-05 12:51:49.561469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.561496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.561601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.561643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.561769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.561795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.561934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.561972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.562068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.562095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 
00:37:20.453 [2024-11-05 12:51:49.562217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.562246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.562386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.562417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.562556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:20.453 [2024-11-05 12:51:49.562585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:20.453 qpair failed and we were unable to recover it. 00:37:20.453 [2024-11-05 12:51:49.562706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.953505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.953798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.953834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 
00:37:21.030 [2024-11-05 12:51:49.954031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.954064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.954198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.954231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.954402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.954434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.954612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.954639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.954741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.954775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 
00:37:21.030 [2024-11-05 12:51:49.954898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.954927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.955050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.955077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.955220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.955251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.955394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.955426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.955592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.955623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 
00:37:21.030 [2024-11-05 12:51:49.955758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.955789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.955907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.955939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.956076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.956121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.956226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.956253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.956421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.956451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 
00:37:21.030 [2024-11-05 12:51:49.956595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.956626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.956764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.956797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.956938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.956969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.957083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.957114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.957209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.957240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 
00:37:21.030 [2024-11-05 12:51:49.957373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.957404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.957506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.957539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.957688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.957722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.957878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.957925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.958043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.958072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 
00:37:21.030 [2024-11-05 12:51:49.958249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.958284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.958406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.958457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.030 [2024-11-05 12:51:49.958606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.030 [2024-11-05 12:51:49.958638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.030 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.958788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.958820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.958991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.959023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 
00:37:21.031 [2024-11-05 12:51:49.959123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.959156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.959347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.959378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.959513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.959545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.959641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.959674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.959804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.959836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 
00:37:21.031 [2024-11-05 12:51:49.959963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.959995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.960095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.960127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.960296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.960328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.960462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.960495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 00:37:21.031 [2024-11-05 12:51:49.960635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.031 [2024-11-05 12:51:49.960667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.031 qpair failed and we were unable to recover it. 
00:37:21.031 [2024-11-05 12:51:49.960854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.960890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.960992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.961021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.961140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.961168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.961323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.961351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.961500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.961538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.961675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.961720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.961845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.961881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.962024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.962060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.962231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.962265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.962413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.962442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.962570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.962599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.962754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.962788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.962955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.962989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.963140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.963175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.963311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.963345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.963454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.963490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.963638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.963674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.963788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.963822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.964029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.964074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.964225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.964255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.964444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.964480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.964602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.964637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.964797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.964831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.964989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.965023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.965197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.965232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.965388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.031 [2024-11-05 12:51:49.965421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.031 qpair failed and we were unable to recover it.
00:37:21.031 [2024-11-05 12:51:49.965554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.965587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.965760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.965790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.965932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.965962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.966109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.966156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.966289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.966324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.966435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.966470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.966646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.966681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.966871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.966900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.967006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.967034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.967164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.967199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.967371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.967407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.967515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.967566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.967709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.967741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.967893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.967929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.968048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.968083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.968198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.968232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.968425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.968459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.968565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.968598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.968738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.968789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.968941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.968969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.969117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.969150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.969319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.969369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.969513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.969561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.969649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.969677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.969815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.969848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.969966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.969999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.970131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.970165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.970335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.970369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.970483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.970517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.970707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.970740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.970912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.970945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.971071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.971106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.971259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.971294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.971414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.971448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.971587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.971622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.971768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.971802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.971956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.971990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.972133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.972182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.032 qpair failed and we were unable to recover it.
00:37:21.032 [2024-11-05 12:51:49.972351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.032 [2024-11-05 12:51:49.972384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.972547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.972580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.972757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.972790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.972934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.972968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.973120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.973153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.973290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.973322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.973447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.973482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.973666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.973731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.973996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.974052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.974264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.974298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.974437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.974471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.974685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.974719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.974847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.974886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.975050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.975084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.975283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.975337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.975545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.975599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.975845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.975929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.976148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.976206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.976415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.976449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.976565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.976598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.976776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.976814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.976955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.976989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.977138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.977172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.977340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.977374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.977584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.977639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.977848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.977915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.978079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.978108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.978256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.978284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.978373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.978401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.978524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.978551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.978669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.978698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.978784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.978811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.978914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.978944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.979029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.979057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.979191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.979220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.979372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.979405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.979614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.979646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.979813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.033 [2024-11-05 12:51:49.979845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.033 qpair failed and we were unable to recover it.
00:37:21.033 [2024-11-05 12:51:49.980107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.980161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.980327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.980382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.980601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.980655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.980899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.980953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.981192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.981226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.981390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.981423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.981522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.981554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.981775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.981830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.982022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.982078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.982378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.982430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.982549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.982584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.982746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.982775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.982877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.034 [2024-11-05 12:51:49.982907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.034 qpair failed and we were unable to recover it.
00:37:21.034 [2024-11-05 12:51:49.983101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.983156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.983326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.983383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.983600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.983657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.983809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.983885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.984100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.984129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 
00:37:21.034 [2024-11-05 12:51:49.984230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.984259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.984354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.984383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.984558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.984613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.984865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.984894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.985011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.985039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 
00:37:21.034 [2024-11-05 12:51:49.985136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.985165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.985293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.985322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.985499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.985553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.985767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.985822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.986073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.986107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 
00:37:21.034 [2024-11-05 12:51:49.986286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.986319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.986539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.986594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.986820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.986891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.987133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.987166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 00:37:21.034 [2024-11-05 12:51:49.987300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.034 [2024-11-05 12:51:49.987334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.034 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.987501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.987534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.987766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.987799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.987931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.987960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.988062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.988095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.988247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.988299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.988426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.988454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.988624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.988661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.988806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.988840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.989132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.989186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.989399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.989432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.989571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.989630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.989843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.989912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.990009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.990037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.990205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.990234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.990328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.990358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.990484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.990512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.990662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.990697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.990853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.990923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.991178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.991233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.991468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.991501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.991595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.991628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.991732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.991766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.992003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.992038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.992176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.992209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.992382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.992415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.992629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.992686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.992939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.992996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.993194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.993227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.993394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.993427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.993574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.993607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.993801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.993882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.994075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.994130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.994376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.994431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.994667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.994700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.994871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.994933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 
00:37:21.035 [2024-11-05 12:51:49.995121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.995175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.995388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.995451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.995687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.035 [2024-11-05 12:51:49.995716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.035 qpair failed and we were unable to recover it. 00:37:21.035 [2024-11-05 12:51:49.995838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.995882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.996037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.996070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 
00:37:21.036 [2024-11-05 12:51:49.996182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.996215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.996353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.996410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.996657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.996690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.996812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.996845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.997018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.997052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 
00:37:21.036 [2024-11-05 12:51:49.997189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.997222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.997390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.997434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.997555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.997583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.997707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.997736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.997835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.997871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 
00:37:21.036 [2024-11-05 12:51:49.998000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.998028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.998111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.998139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.998239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.998267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.998346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.998374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.998500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.998528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 
00:37:21.036 [2024-11-05 12:51:49.998642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.998676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.998886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.998943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.999147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.999176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.999268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.999297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 00:37:21.036 [2024-11-05 12:51:49.999424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.036 [2024-11-05 12:51:49.999452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.036 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-11-05 12:51:50.025219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.025270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.025469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.025520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.025760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.025788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.025900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.025930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.026055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-11-05 12:51:50.026178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.026297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.026415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.026543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.026702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-11-05 12:51:50.026827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.026967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.026996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.027118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.027148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.027327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.027379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.027550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.027602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 
00:37:21.039 [2024-11-05 12:51:50.027782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.027834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.028049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.028100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.028336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.028388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.039 [2024-11-05 12:51:50.028540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.039 [2024-11-05 12:51:50.028591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.039 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.028745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.028797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.029026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.029079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.029281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.029332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.029501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.029554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.029764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.029833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.030083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.030135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.030311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.030362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.030577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.030628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.030877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.030955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.031194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.031246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.031407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.031458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.031626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.031680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.031848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.031915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.032096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.032147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.032321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.032372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.032539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.032591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.032829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.032908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.033108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.033159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.033353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.033405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.033605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.033656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.033880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.033934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.034124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.034185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.034431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.034483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.034688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.034739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.034974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.035027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.035241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.035293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.035519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.035570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.035802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.035854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.036043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.036095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.036270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.036320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.036514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.036566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.036754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.036818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.037039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.037094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.037333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.037388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.037637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.037694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.037918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.037973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 
00:37:21.040 [2024-11-05 12:51:50.038185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.038239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.038427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.038481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.040 [2024-11-05 12:51:50.038713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.040 [2024-11-05 12:51:50.038777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.040 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.038995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.039052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.039233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.039290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 
00:37:21.041 [2024-11-05 12:51:50.039542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.039598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.039879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.039935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.040169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.040224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.040480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.040535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.040788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.040845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 
00:37:21.041 [2024-11-05 12:51:50.041137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.041193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.041387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.041443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.041674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.041746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.041983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.042053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.042271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.042352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 
00:37:21.041 [2024-11-05 12:51:50.042646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.042721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.042906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.042963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.043137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.043194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.043415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.043470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 00:37:21.041 [2024-11-05 12:51:50.043652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.041 [2024-11-05 12:51:50.043707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.041 qpair failed and we were unable to recover it. 
00:37:21.041 [2024-11-05 12:51:50.043882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.041 [2024-11-05 12:51:50.043938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.041 qpair failed and we were unable to recover it.
[the three-line entry above (connect() failed, errno = 111 / sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats identically for every reconnect attempt from 12:51:50.043882 through 12:51:50.078102; only the timestamps differ]
00:37:21.044 [2024-11-05 12:51:50.078314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.078370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.078586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.078644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.078852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.078920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.079150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.079205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.079412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.079468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-11-05 12:51:50.079683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.079738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.079938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.079994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.080187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.080242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.080490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.080544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.080726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.080790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-11-05 12:51:50.081013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.081066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.081298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.081350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.081492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.081544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.081714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.081765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.081973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.082025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 
00:37:21.044 [2024-11-05 12:51:50.082233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.082285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.082491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.082542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.082708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.044 [2024-11-05 12:51:50.082759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.044 qpair failed and we were unable to recover it. 00:37:21.044 [2024-11-05 12:51:50.082926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.082979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.083186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.083239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.083434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.083485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.083665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.083717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.083893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.083964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.084200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.084461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.084514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.084750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.084801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.085091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.085143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.085332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.085380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.085541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.085589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.085760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.085808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.085973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.086024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.086180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.086231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.086401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.086465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.086747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.086796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.087026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.087078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.087303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.087368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.087610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.087688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.087954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.088021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.088221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.088288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.088527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.088592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.088844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.088933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.089144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.089196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.089438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.089503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.089746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.089810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.090105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.090176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.090408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.090468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.090732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.090798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.091073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.091140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.091384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.091449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.091694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.091757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.092030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.092096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.092393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.092458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.092714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.092780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.093064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.093130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.093385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.093451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 
00:37:21.045 [2024-11-05 12:51:50.093689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.045 [2024-11-05 12:51:50.093752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.045 qpair failed and we were unable to recover it. 00:37:21.045 [2024-11-05 12:51:50.094052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.094118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.094359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.094424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.094674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.094737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.094977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.095043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 [2024-11-05 12:51:50.095315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.095380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.095623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.095687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.095981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.096046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.096346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.096412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.096654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.096717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 [2024-11-05 12:51:50.096998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.097062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.097320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.097387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.097632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.097696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.097948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.098014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.098295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.098360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 [2024-11-05 12:51:50.098556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.098623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.098888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.098953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.099191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.099256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.099556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.099621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.099918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.099983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.046 [2024-11-05 12:51:50.100245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.100310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.100597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.100673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.100928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.101020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.101292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.101357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 00:37:21.046 [2024-11-05 12:51:50.101604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.046 [2024-11-05 12:51:50.101669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.046 qpair failed and we were unable to recover it. 
00:37:21.049 [2024-11-05 12:51:50.137710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.137773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.138006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.138071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.138326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.138390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.138686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.138749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.138951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.139017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 
00:37:21.049 [2024-11-05 12:51:50.139227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.139293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.139493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.139567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.139873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.139939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.140160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.140226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.140466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.140530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 
00:37:21.049 [2024-11-05 12:51:50.140773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.140837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.141114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.141181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.141433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.141497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.141704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.141768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 00:37:21.049 [2024-11-05 12:51:50.142071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.049 [2024-11-05 12:51:50.142137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.049 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.142376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.142440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.142741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.142805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.143110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.143175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.143390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.143454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.143705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.143769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.144087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.144154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.144352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.144418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.144708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.144774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.145091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.145158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.145413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.145477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.145766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.145831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.146104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.146169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.146415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.146481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.146733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.146797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.147098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.147166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.147417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.147482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.147668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.147733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.147977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.148044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.148295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.148397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.148668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.148739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.149068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.149138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.149333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.149398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.149599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.149665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.149934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.150008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.150256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.150323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.150613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.150679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.150880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.150971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.151281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.151346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.151616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.151681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.151942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.152009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.152239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.152305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.152549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.152613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.152854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.152937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.153201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.153268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.153472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.153537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 
00:37:21.050 [2024-11-05 12:51:50.153798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.153891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.154150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.154238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.154504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.050 [2024-11-05 12:51:50.154571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.050 qpair failed and we were unable to recover it. 00:37:21.050 [2024-11-05 12:51:50.154879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.154945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.155184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.155248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 
00:37:21.051 [2024-11-05 12:51:50.155535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.155599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.155886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.155955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.156209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.156281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.156559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.156625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.156884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.156952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 
00:37:21.051 [2024-11-05 12:51:50.157170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.157236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.157461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.157529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.157774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.157839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.158164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.158230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.158527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.158600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 
00:37:21.051 [2024-11-05 12:51:50.158852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.158943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.159211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.159277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.159512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.159579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.159830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.159917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.160192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.160257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 
00:37:21.051 [2024-11-05 12:51:50.160551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.160617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.160879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.160948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.161205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.161273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.161525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.161603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 00:37:21.051 [2024-11-05 12:51:50.161904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.161987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it. 
00:37:21.051 [2024-11-05 12:51:50.162285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.051 [2024-11-05 12:51:50.162351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.051 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure messages repeated from 12:51:50.162645 through 12:51:50.200146: every reconnect attempt to 10.0.0.2 port 4420 on tqpair=0x7f47ac000b90 failed with errno = 111 (connection refused) ...]
00:37:21.054 [2024-11-05 12:51:50.200146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.200214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it.
00:37:21.054 [2024-11-05 12:51:50.200505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.200570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.200812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.200897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.201196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.201262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.201509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.201588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.201810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.201894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 
00:37:21.054 [2024-11-05 12:51:50.202143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.202209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.202500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.202564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.202849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.202945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.203158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.203223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.203463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.203544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 
00:37:21.054 [2024-11-05 12:51:50.203809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.203898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.054 [2024-11-05 12:51:50.204167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.054 [2024-11-05 12:51:50.204232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.054 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.204471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.204536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.204886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.204966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.205193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.205259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.205509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.205573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.205816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.205905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.206139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.206202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.206486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.206551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.206806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.206901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.207172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.207243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.207543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.207608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.207906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.207977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.208176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.208239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.208534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.208599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.208839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.208944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.209234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.209302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.209550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.209615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.209910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.209977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.210250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.210317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.210622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.210689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.210966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.211034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.211315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.211385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.211651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.211717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.211952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.212018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.212304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.212369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.212674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.212738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.212967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.213034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.213277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.213342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.213553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.213628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.213832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.213919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.214171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.214236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.214484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.214549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.214772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.214854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.215122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.215188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.215438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.215506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.215705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.215772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.216100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.216166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.216405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.216470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 00:37:21.055 [2024-11-05 12:51:50.216732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.055 [2024-11-05 12:51:50.216818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.055 qpair failed and we were unable to recover it. 
00:37:21.055 [2024-11-05 12:51:50.217057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.217123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.217380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.217445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.217672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.217736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.218035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.218101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.218353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.218418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 
00:37:21.056 [2024-11-05 12:51:50.218700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.218764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.219027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.219097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.219402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.219468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.219763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.219827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.220104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.220174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 
00:37:21.056 [2024-11-05 12:51:50.220437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.220503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.220711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.220776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.221081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.221166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.221474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.221540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.221736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.221800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 
00:37:21.056 [2024-11-05 12:51:50.222015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.222082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.222377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.222447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.222697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.222764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.223056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.223125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.223339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.223419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 
00:37:21.056 [2024-11-05 12:51:50.223675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.223741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.224006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.224075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.224369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.224445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.224719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.224787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 00:37:21.056 [2024-11-05 12:51:50.225066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.056 [2024-11-05 12:51:50.225135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.056 qpair failed and we were unable to recover it. 
00:37:21.056 [2024-11-05 12:51:50.225436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.056 [2024-11-05 12:51:50.225502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.056 qpair failed and we were unable to recover it.
[... the three-line error above repeats verbatim ~115 more times between 12:51:50.225 and 12:51:50.263 (elapsed 00:37:21.056 through 00:37:21.333), always for the same tqpair=0x7f47ac000b90, addr=10.0.0.2, port=4420; only the microsecond timestamps differ ...]
00:37:21.333 [2024-11-05 12:51:50.263408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.333 [2024-11-05 12:51:50.263474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.333 qpair failed and we were unable to recover it.
00:37:21.333 [2024-11-05 12:51:50.263736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.263806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.264088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.264175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.264401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.264469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.264685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.264753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.265024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.265092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 
00:37:21.333 [2024-11-05 12:51:50.265338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.265404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.265691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.265758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.266029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.266098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.266398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.266466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.266768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.266834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 
00:37:21.333 [2024-11-05 12:51:50.267129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.267193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.267453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.267523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.267731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.267797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.268097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.268179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.268469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.268540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 
00:37:21.333 [2024-11-05 12:51:50.268824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.268914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.269210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.269276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.269517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.269604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.269851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.269944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.270242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.270309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 
00:37:21.333 [2024-11-05 12:51:50.270555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.270623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.270936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.271006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.271248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.271313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.271605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.271670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.271905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.271984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 
00:37:21.333 [2024-11-05 12:51:50.272252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.272317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.272567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.272634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.273007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.273077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.273372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.273437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.273689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.273755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 
00:37:21.333 [2024-11-05 12:51:50.274050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.274120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.274389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.274454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.274636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.274702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.274911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.274980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 00:37:21.333 [2024-11-05 12:51:50.275196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.333 [2024-11-05 12:51:50.275265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.333 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.275526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.275595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.275843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.275946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.276246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.276323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.276595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.276662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.276931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.276999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.277260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.277336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.277562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.277629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.277924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.277992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.278235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.278301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.278564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.278634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.278934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.279002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.279257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.279322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.279539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.279624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.279855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.279942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.280200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.280267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.280566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.280631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.280969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.281038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.281298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.281364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.281659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.281743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.282007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.282076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.282375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.282442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.282678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.282742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.283000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.283069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.283319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.283385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.283685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.283749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.284041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.284112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.284364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.284431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.284686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.284751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.285076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.285146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.285438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.285504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.285713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.285777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.286013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.286082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.286356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.286425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.286727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.286792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.287064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.287133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 
00:37:21.334 [2024-11-05 12:51:50.287386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.334 [2024-11-05 12:51:50.287455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.334 qpair failed and we were unable to recover it. 00:37:21.334 [2024-11-05 12:51:50.287713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.335 [2024-11-05 12:51:50.287779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.335 qpair failed and we were unable to recover it. 00:37:21.335 [2024-11-05 12:51:50.288060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.335 [2024-11-05 12:51:50.288127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.335 qpair failed and we were unable to recover it. 00:37:21.335 [2024-11-05 12:51:50.288373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.335 [2024-11-05 12:51:50.288464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.335 qpair failed and we were unable to recover it. 00:37:21.335 [2024-11-05 12:51:50.288737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.335 [2024-11-05 12:51:50.288802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.335 qpair failed and we were unable to recover it. 
00:37:21.335 [2024-11-05 12:51:50.289051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.335 [2024-11-05 12:51:50.289120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.335 qpair failed and we were unable to recover it.
00:37:21.338 [2024-11-05 12:51:50.324740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.324809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.325122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.325219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.325521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.325592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.325839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.325931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.326153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.326221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 
00:37:21.338 [2024-11-05 12:51:50.326520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.326588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.326831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.326910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.327174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.327240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.327531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.327598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.327874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.327953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 
00:37:21.338 [2024-11-05 12:51:50.328246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.328311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.328565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.328631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.328807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.328887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.329165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.329237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.329528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.329594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 
00:37:21.338 [2024-11-05 12:51:50.329837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.329925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.330136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.330206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.330409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.330475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.330765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.330830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.331093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.331160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 
00:37:21.338 [2024-11-05 12:51:50.331447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.331527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.331822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.331904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.332197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.332262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.332513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.332578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.332822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.332915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 
00:37:21.338 [2024-11-05 12:51:50.333168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.333235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.333524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.333591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.333882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.333953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.334166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.334234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.334526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.334596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 
00:37:21.338 [2024-11-05 12:51:50.334893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.334961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.335221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.335287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.338 qpair failed and we were unable to recover it. 00:37:21.338 [2024-11-05 12:51:50.335488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.338 [2024-11-05 12:51:50.335557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.335897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.335966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.336285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.336349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 
00:37:21.339 [2024-11-05 12:51:50.336540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.336604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.336851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.336945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.337197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.337262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.337508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.337573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.337846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.337946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 
00:37:21.339 [2024-11-05 12:51:50.338209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.338276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.338519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.338588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.338892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.338983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.339258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.339325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.339577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.339645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 
00:37:21.339 [2024-11-05 12:51:50.339912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.339982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.340177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.340242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.340461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.340529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.340768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.340833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.341111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.341180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 
00:37:21.339 [2024-11-05 12:51:50.341436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.341501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.341720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.341786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.342069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.342143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.342380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.342447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.342715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.342782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 
00:37:21.339 [2024-11-05 12:51:50.342988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.343057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.343342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.343409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.343611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.343680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.343970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.344039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.344261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.344350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 
00:37:21.339 [2024-11-05 12:51:50.344662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.344740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.345029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.345096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.345353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.345418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.345666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.345735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.346007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.346074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 
00:37:21.339 [2024-11-05 12:51:50.346278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.346345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.346553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.346620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.346914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.346982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.339 [2024-11-05 12:51:50.347268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.339 [2024-11-05 12:51:50.347335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.339 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.347590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.347670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.340 [2024-11-05 12:51:50.347964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.348034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.348302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.348369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.348675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.348754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.349024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.349092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.349360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.349427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.340 [2024-11-05 12:51:50.349682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.349749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.350123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.350192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.350484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.350549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.350764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.350829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.351100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.351166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.340 [2024-11-05 12:51:50.351413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.351479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.351699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.351767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.352060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.352131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.352382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.352449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.352651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.352717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.340 [2024-11-05 12:51:50.353010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.353092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.353402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.353467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.353775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.353841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.354167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.354256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.354509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.354574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.340 [2024-11-05 12:51:50.354889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.354956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.355187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.355254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.355514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.355580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.355827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.355915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.356158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.356223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.340 [2024-11-05 12:51:50.356499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.356566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.356775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.356843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.357171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.357236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.357492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.357561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.357780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.357845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.340 [2024-11-05 12:51:50.358126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.358205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.358450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.358521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.358793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.358908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.359225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.359291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 00:37:21.340 [2024-11-05 12:51:50.359521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.340 [2024-11-05 12:51:50.359585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.340 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.359811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.359906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.360179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.360245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.360532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.360596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.360811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.360902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.361130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.361199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.361463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.361528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.361780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.361851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.362212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.362281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.362577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.362643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.362935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.363003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.363223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.363288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.363559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.363624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.363881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.363950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.364263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.364335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.364601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.364669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.364926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.364994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.365202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.365268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.365569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.365634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.365935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.366002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.366238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.366306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.366559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.366625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.366829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.366925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.367173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.367240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.367522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.367591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.367843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.367929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.368143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.368210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.368519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.368587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.368892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.368960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.369203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.369269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.369455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.369537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.369821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.369905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.370129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.370196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.370498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.370566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.370784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.370850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.371132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.371197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 
00:37:21.341 [2024-11-05 12:51:50.371484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.371560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.341 [2024-11-05 12:51:50.371768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.341 [2024-11-05 12:51:50.371836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.341 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.372146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.372215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.372508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.372574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.372892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.372964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 
00:37:21.342 [2024-11-05 12:51:50.373179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.373248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.373503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.373569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.373754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.373818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.374089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.374157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.374401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.374466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 
00:37:21.342 [2024-11-05 12:51:50.374756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.374821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.375141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.375211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.375469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.375535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.375718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.375783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.376083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.376171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 
00:37:21.342 [2024-11-05 12:51:50.376405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.376473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.376717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.376782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.377106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.377175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.377475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.377543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.377786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.377852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 
00:37:21.342 [2024-11-05 12:51:50.378142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.378208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.378401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.378470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.378728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.378794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.379048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.379115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.379404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.379488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 
00:37:21.342 [2024-11-05 12:51:50.379784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.379850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.380097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.380166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.380479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.380546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.380808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.380891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.381145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.381210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 
00:37:21.342 [2024-11-05 12:51:50.381509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.381577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.381890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.381958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.382221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.382286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.342 qpair failed and we were unable to recover it. 00:37:21.342 [2024-11-05 12:51:50.382484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.342 [2024-11-05 12:51:50.382551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.343 qpair failed and we were unable to recover it. 00:37:21.343 [2024-11-05 12:51:50.382804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.343 [2024-11-05 12:51:50.382893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.343 qpair failed and we were unable to recover it. 
00:37:21.343 [2024-11-05 12:51:50.383127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.343 [2024-11-05 12:51:50.383193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.343 qpair failed and we were unable to recover it. 00:37:21.343 [2024-11-05 12:51:50.383380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.343 [2024-11-05 12:51:50.383445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.343 qpair failed and we were unable to recover it. 00:37:21.343 [2024-11-05 12:51:50.383730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.343 [2024-11-05 12:51:50.383795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.343 qpair failed and we were unable to recover it. 00:37:21.343 [2024-11-05 12:51:50.384120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.343 [2024-11-05 12:51:50.384189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.343 qpair failed and we were unable to recover it. 00:37:21.343 [2024-11-05 12:51:50.384474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.343 [2024-11-05 12:51:50.384540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.343 qpair failed and we were unable to recover it. 
00:37:21.343 [2024-11-05 12:51:50.384794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.384901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.385165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.385232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.385476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.385542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.385787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.385852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.386078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.386143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.386345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.386414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.386678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.386743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.386977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.387056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.387364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.387430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.387689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.387757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.388067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.388141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.388426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.388495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.388796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.388880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.389146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.389212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.389481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.389547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.389796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.389877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.390168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.390235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.390511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.390581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.390785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.390851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.391125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.391191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.391443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.391509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.391763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.391830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.392160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.392225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.392513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.392581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.392895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.392964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.393169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.393235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.393528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.393618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.393906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.393975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.394167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.394234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.394540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.394604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.394903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.343 [2024-11-05 12:51:50.394972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.343 qpair failed and we were unable to recover it.
00:37:21.343 [2024-11-05 12:51:50.395162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.395227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.395512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.395576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.395894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.395963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.396204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.396272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.396570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.396635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.396920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.396992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.397202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.397268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.397477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.397543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.397790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.397857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.398121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.398200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.398458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.398526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.398813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.398899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.399145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.399214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.399481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.399546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.399838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.399925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.400145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.400218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.400479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.400547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.400841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.400926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.401184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.401270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.401524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.401592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.401838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.401924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.402226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.402292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.402545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.402612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.402922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.402991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.403300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.403373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.403633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.403701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.403947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.404013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.404234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.404302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.404606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.404672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.404884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.404951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.405201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.405266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.405476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.405557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.405874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.405945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.406210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.406276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.406536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.344 [2024-11-05 12:51:50.406605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.344 qpair failed and we were unable to recover it.
00:37:21.344 [2024-11-05 12:51:50.406889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.406959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.407268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.407333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.407570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.407638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.407895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.407965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.408229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.408294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.408592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.408659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.408889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.408979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.409285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.409350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.409612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.409677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.409964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.410031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.410329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.410394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.410645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.410712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.410969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.411053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.411323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.411389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.411646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.411724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.412027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.412101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.412429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.412496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.412714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.412782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.413061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.413132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.413350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.413417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.413658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.413727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.414015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.414082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.414349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.414436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.414705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.414771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.415030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.415097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.415356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.415423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.415693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.415761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.416037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.416106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.416311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.416377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.416626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.416709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.416954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.417023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.417263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.417328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.417539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.417603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.417851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.417938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.418178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.418243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.418528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.418593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.345 [2024-11-05 12:51:50.418896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.345 [2024-11-05 12:51:50.418969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.345 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.419176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.419245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.419538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.419604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.419856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.419954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.420254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.420320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.420624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.420692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.420955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.421024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.421332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.421399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.421686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.421751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.422067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.422136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.422405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.422473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.422679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.422745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.422986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.346 [2024-11-05 12:51:50.423053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.346 qpair failed and we were unable to recover it.
00:37:21.346 [2024-11-05 12:51:50.423300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.423390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.423648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.423715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.423968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.424035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.424339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.424405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.424682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.424751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 
00:37:21.346 [2024-11-05 12:51:50.425014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.425092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.425399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.425465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.425688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.425756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.426082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.426150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.426360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.426428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 
00:37:21.346 [2024-11-05 12:51:50.426679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.426744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.426975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.427011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.427114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.427148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.427384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.427448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.427708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.427786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 
00:37:21.346 [2024-11-05 12:51:50.428055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.428090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.428285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.428351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.428601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.428666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.428902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.428956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.429113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.429148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 
00:37:21.346 [2024-11-05 12:51:50.429323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.429388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.429681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.429746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.429990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.430024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.430200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.430234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.346 qpair failed and we were unable to recover it. 00:37:21.346 [2024-11-05 12:51:50.430391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.346 [2024-11-05 12:51:50.430458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 
00:37:21.347 [2024-11-05 12:51:50.430722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.430789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.431030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.431066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.431183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.431217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.431328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.431362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.431579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.431645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 
00:37:21.347 [2024-11-05 12:51:50.431840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.431939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.432078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.432112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.432316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.432381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.432670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.432735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.432995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.433031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 
00:37:21.347 [2024-11-05 12:51:50.433153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.433188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.433351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.433416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.433662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.433727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.433972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.434007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.434122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.434169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 
00:37:21.347 [2024-11-05 12:51:50.434339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.434404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.434660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.434728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.434947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.434983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.435121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.435171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.435321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.435356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 
00:37:21.347 [2024-11-05 12:51:50.435581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.435657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.435885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.435940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.436060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.436095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.436238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.436310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.436580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.436644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 
00:37:21.347 [2024-11-05 12:51:50.436921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.436957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.437176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.437214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.437463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.437534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.437803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.437891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.438069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.438105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 
00:37:21.347 [2024-11-05 12:51:50.438302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.438369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.438669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.438735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.438982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.347 [2024-11-05 12:51:50.439019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.347 qpair failed and we were unable to recover it. 00:37:21.347 [2024-11-05 12:51:50.439127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.439209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.439511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.439580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 
00:37:21.348 [2024-11-05 12:51:50.439928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.439965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.440113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.440148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.440304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.440370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.440634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.440692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.440874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.440917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 
00:37:21.348 [2024-11-05 12:51:50.441102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.441156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.441372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.441438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.441689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.441756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.441976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.442011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.442156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.442234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 
00:37:21.348 [2024-11-05 12:51:50.442433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.442500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.442793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.442892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.443071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.443105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.443220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.443297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 00:37:21.348 [2024-11-05 12:51:50.443555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.348 [2024-11-05 12:51:50.443622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.348 qpair failed and we were unable to recover it. 
00:37:21.348 [2024-11-05 12:51:50.443884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.348 [2024-11-05 12:51:50.443952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:21.348 qpair failed and we were unable to recover it.
00:37:21.351 [2024-11-05 12:51:50.481692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.481757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.482010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.482077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.482429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.482527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.482815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.482907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.483224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.483290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 
00:37:21.351 [2024-11-05 12:51:50.483542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.483606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.483877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.483947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.484169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.484235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.484527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.484593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.351 [2024-11-05 12:51:50.484801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.484885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 
00:37:21.351 [2024-11-05 12:51:50.485176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.351 [2024-11-05 12:51:50.485240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.351 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.485448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.485512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.485804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.485886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.486172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.486235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.486491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.486556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 
00:37:21.352 [2024-11-05 12:51:50.486791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.486855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.487150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.487215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.487513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.487577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.487891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.487959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.488216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.488281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 
00:37:21.352 [2024-11-05 12:51:50.488581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.488646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.488879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.488946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.489153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.489219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.489506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.489570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.489886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.489952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 
00:37:21.352 [2024-11-05 12:51:50.490177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.490241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.490490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.490555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.490754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.490817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.491080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.491145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.491414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.491489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 
00:37:21.352 [2024-11-05 12:51:50.491758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.491822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.492061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.492126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.492371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.492435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.492685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.492748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.493013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.493080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 
00:37:21.352 [2024-11-05 12:51:50.493333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.493397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.493686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.493749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.494052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.494118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.494335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.494400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.494687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.494750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 
00:37:21.352 [2024-11-05 12:51:50.495025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.495091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.495328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.495393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.495587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.495651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.495909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.495976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.496166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.496230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 
00:37:21.352 [2024-11-05 12:51:50.496475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.496538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.496799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.496875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.497167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.352 [2024-11-05 12:51:50.497232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.352 qpair failed and we were unable to recover it. 00:37:21.352 [2024-11-05 12:51:50.497524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.497588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.497827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.497912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 
00:37:21.353 [2024-11-05 12:51:50.498155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.498220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.498439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.498504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.498751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.498814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.499031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.499097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.499366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.499431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 
00:37:21.353 [2024-11-05 12:51:50.499684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.499747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.499994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.500072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.500367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.500432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.500676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.500740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.500996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.501062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 
00:37:21.353 [2024-11-05 12:51:50.501340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.501405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.501701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.501764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.502065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.502131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.502385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.502450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.502742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.502805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 
00:37:21.353 [2024-11-05 12:51:50.503079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.503144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.503397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.503461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.503811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.504040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.504105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.504322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.504388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 
00:37:21.353 [2024-11-05 12:51:50.504630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.504695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.504933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.505001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.505256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.505320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.505570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.505634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.505914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.505981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 
00:37:21.353 [2024-11-05 12:51:50.506192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.506255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.506503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.506567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.506877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.506943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.507229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.507293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 00:37:21.353 [2024-11-05 12:51:50.507553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.353 [2024-11-05 12:51:50.507617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.353 qpair failed and we were unable to recover it. 
00:37:21.356 [2024-11-05 12:51:50.542964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.356 [2024-11-05 12:51:50.543030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.543281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.543345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.543601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.543667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.543912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.543977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.544227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.544291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 
00:37:21.357 [2024-11-05 12:51:50.544507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.544571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.544872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.544937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.545195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.545260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.545548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.545612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.545896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.545964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 
00:37:21.357 [2024-11-05 12:51:50.546220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.546285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.546480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.546546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.546834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.546919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.547174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.547238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.547528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.547592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 
00:37:21.357 [2024-11-05 12:51:50.547888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.547954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.548232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.548297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.548604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.548669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.548882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.548948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.549207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.549272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 
00:37:21.357 [2024-11-05 12:51:50.549523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.549588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.549839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.549920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.550185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.550249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.550509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.550574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.550886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.550952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 
00:37:21.357 [2024-11-05 12:51:50.551265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.551329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.551579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.551644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.551835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.551916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.552169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.552234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.552539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.552604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 
00:37:21.357 [2024-11-05 12:51:50.552904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.552970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.553279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.553344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.553547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.553611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.553855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.553951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.554165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.554230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 
00:37:21.357 [2024-11-05 12:51:50.554452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.554516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.554816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.554894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.555160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.357 [2024-11-05 12:51:50.555224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.357 qpair failed and we were unable to recover it. 00:37:21.357 [2024-11-05 12:51:50.555481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.555546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.555793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.555873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 
00:37:21.358 [2024-11-05 12:51:50.556089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.556153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.556448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.556514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.556767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.556832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.557078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.557145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.557349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.557415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 
00:37:21.358 [2024-11-05 12:51:50.557668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.557733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.558026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.558094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.558384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.558448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.558704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.558769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.559024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.559089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 
00:37:21.358 [2024-11-05 12:51:50.559331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.559395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.559647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.559711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.358 [2024-11-05 12:51:50.559970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.358 [2024-11-05 12:51:50.560035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.358 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.560248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.560312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.560534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.560598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 
00:37:21.632 [2024-11-05 12:51:50.560840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.560918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.561139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.561213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.561469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.561549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.561760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.561824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.562094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.562158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 
00:37:21.632 [2024-11-05 12:51:50.562382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.562446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.562732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.562796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.563028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.563093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.563351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.563416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.563715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.563779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 
00:37:21.632 [2024-11-05 12:51:50.564005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.564071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.564303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.564368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.564573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.632 [2024-11-05 12:51:50.564636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.632 qpair failed and we were unable to recover it. 00:37:21.632 [2024-11-05 12:51:50.564929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.633 [2024-11-05 12:51:50.564995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.633 qpair failed and we were unable to recover it. 00:37:21.633 [2024-11-05 12:51:50.565290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.633 [2024-11-05 12:51:50.565355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.633 qpair failed and we were unable to recover it. 
00:37:21.633 [2024-11-05 12:51:50.565580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.633 [2024-11-05 12:51:50.565644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.633 qpair failed and we were unable to recover it. 00:37:21.633 [2024-11-05 12:51:50.565848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.633 [2024-11-05 12:51:50.565945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.633 qpair failed and we were unable to recover it. 00:37:21.633 [2024-11-05 12:51:50.566253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.633 [2024-11-05 12:51:50.566317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.633 qpair failed and we were unable to recover it. 00:37:21.633 [2024-11-05 12:51:50.566558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.633 [2024-11-05 12:51:50.566623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.633 qpair failed and we were unable to recover it. 00:37:21.633 [2024-11-05 12:51:50.566891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.633 [2024-11-05 12:51:50.566957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.633 qpair failed and we were unable to recover it. 
00:37:21.633 [2024-11-05 12:51:50.567196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.633 [2024-11-05 12:51:50.567261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.633 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet for tqpair=0x12f8690 (addr=10.0.0.2, port=4420) repeats continuously from 12:51:50.567 through 12:51:50.604 ...]
00:37:21.636 [2024-11-05 12:51:50.604582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.636 [2024-11-05 12:51:50.604646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.636 qpair failed and we were unable to recover it. 00:37:21.636 [2024-11-05 12:51:50.604892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.636 [2024-11-05 12:51:50.604957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.636 qpair failed and we were unable to recover it. 00:37:21.636 [2024-11-05 12:51:50.605255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.636 [2024-11-05 12:51:50.605318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.636 qpair failed and we were unable to recover it. 00:37:21.636 [2024-11-05 12:51:50.605561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.636 [2024-11-05 12:51:50.605624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.636 qpair failed and we were unable to recover it. 00:37:21.636 [2024-11-05 12:51:50.605834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.636 [2024-11-05 12:51:50.605910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.636 qpair failed and we were unable to recover it. 
00:37:21.637 [2024-11-05 12:51:50.606167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.606230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.606437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.606500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.606744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.606808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.607089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.607154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.607376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.607439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 
00:37:21.637 [2024-11-05 12:51:50.607730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.607793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.608061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.608125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.608374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.608438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.608640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.608705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.608948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.609014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 
00:37:21.637 [2024-11-05 12:51:50.609242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.609316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.609579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.609643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.609921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.609988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.610246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.610311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.610557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.610620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 
00:37:21.637 [2024-11-05 12:51:50.610889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.610954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.611258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.611322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.611575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.611639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.611886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.611950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.612164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.612230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 
00:37:21.637 [2024-11-05 12:51:50.612508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.612572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.612772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.612836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.613149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.613213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.613464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.613528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.613846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.613950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 
00:37:21.637 [2024-11-05 12:51:50.614197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.614261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.614448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.614512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.614790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.614854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.615130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.637 [2024-11-05 12:51:50.615194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.637 qpair failed and we were unable to recover it. 00:37:21.637 [2024-11-05 12:51:50.615433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.615497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 
00:37:21.638 [2024-11-05 12:51:50.615759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.615824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.616056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.616121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.616340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.616403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.616655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.616719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.616959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.617026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 
00:37:21.638 [2024-11-05 12:51:50.617239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.617304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.617561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.617625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.617886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.617962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.618216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.618280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.618523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.618586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 
00:37:21.638 [2024-11-05 12:51:50.618837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.618915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.619131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.619195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.619434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.619498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.619677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.619741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.619961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.620026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 
00:37:21.638 [2024-11-05 12:51:50.620237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.620301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.620491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.620556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.620780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.620845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.621135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.621202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.621466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.621530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 
00:37:21.638 [2024-11-05 12:51:50.621787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.621851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.622193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.622258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.622464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.622527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.622717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.622782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.623014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.623080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 
00:37:21.638 [2024-11-05 12:51:50.623321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.623384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.623603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.623667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.638 qpair failed and we were unable to recover it. 00:37:21.638 [2024-11-05 12:51:50.623922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.638 [2024-11-05 12:51:50.623989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.624254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.624317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.624547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.624611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 
00:37:21.639 [2024-11-05 12:51:50.624872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.624938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.625186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.625249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.625556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.625620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.625925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.625992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.626241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.626305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 
00:37:21.639 [2024-11-05 12:51:50.626575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.626639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.626858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.626936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.627197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.627260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.627461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.627525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.627773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.627838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 
00:37:21.639 [2024-11-05 12:51:50.628070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.628136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.628381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.628445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.628661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.628723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.628962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.629028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 00:37:21.639 [2024-11-05 12:51:50.629317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.639 [2024-11-05 12:51:50.629381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.639 qpair failed and we were unable to recover it. 
00:37:21.642 [2024-11-05 12:51:50.665014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.665080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.665300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.665364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.665671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.665735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.665973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.666039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.666348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.666412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 
00:37:21.643 [2024-11-05 12:51:50.666644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.666708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.666957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.667023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.667311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.667375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.667576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.667642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.667895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.667961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 
00:37:21.643 [2024-11-05 12:51:50.668211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.668276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.668498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.668562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.668821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.668900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.669146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.669210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.669514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.669578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 
00:37:21.643 [2024-11-05 12:51:50.669843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.669948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.670242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.670307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.670508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.670572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.670817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.670901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.671135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.671199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 
00:37:21.643 [2024-11-05 12:51:50.671491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.671555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.671798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.671879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.672127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.672191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.672408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.672473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.672738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.672803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 
00:37:21.643 [2024-11-05 12:51:50.673065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.673132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.673346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.673410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.673649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.673712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.673963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.674029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.643 qpair failed and we were unable to recover it. 00:37:21.643 [2024-11-05 12:51:50.674289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.643 [2024-11-05 12:51:50.674353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 
00:37:21.644 [2024-11-05 12:51:50.674608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.674673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.674965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.675032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.675240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.675304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.675546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.675612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.675830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.675911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 
00:37:21.644 [2024-11-05 12:51:50.676170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.676234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.676489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.676552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.676858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.676938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.677182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.677246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.677493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.677558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 
00:37:21.644 [2024-11-05 12:51:50.677772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.677836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.678122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.678186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.678430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.678510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.678717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.678781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.679053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.679119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 
00:37:21.644 [2024-11-05 12:51:50.679373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.679436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.679689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.679752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.680052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.680117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.680372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.680437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.680689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.680753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 
00:37:21.644 [2024-11-05 12:51:50.681023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.681089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.681376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.681442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.681653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.681716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.681942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.682008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.682287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.682352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 
00:37:21.644 [2024-11-05 12:51:50.682592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.682655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.682911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.682978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.683229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.644 [2024-11-05 12:51:50.683294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.644 qpair failed and we were unable to recover it. 00:37:21.644 [2024-11-05 12:51:50.683581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.683645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.683895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.683960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 
00:37:21.645 [2024-11-05 12:51:50.684198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.684263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.684548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.684612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.684926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.684992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.685288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.685352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.685611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.685675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 
00:37:21.645 [2024-11-05 12:51:50.685982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.686048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.686301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.686368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.686650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.686716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.687003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.687068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.687278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.687342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 
00:37:21.645 [2024-11-05 12:51:50.687636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.687701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.687947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.688013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.688302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.688367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.688652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.688715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.688970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.689035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 
00:37:21.645 [2024-11-05 12:51:50.689336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.689401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.689661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.689724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.689961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.690027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.690226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.690292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 00:37:21.645 [2024-11-05 12:51:50.690543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.645 [2024-11-05 12:51:50.690607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.645 qpair failed and we were unable to recover it. 
00:37:21.645 [... same three-line failure repeats from 12:51:50.690887 through 12:51:50.725648: posix.c:1054:posix_sock_create connect() failed with errno = 111 (connection refused), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:21.649 [2024-11-05 12:51:50.725889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.725957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.726212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.726277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.726544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.726609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.726877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.726943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.727228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.727292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 
00:37:21.649 [2024-11-05 12:51:50.727500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.727565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.727805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.727886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.728144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.728210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.728453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.728518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.728802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.728886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 
00:37:21.649 [2024-11-05 12:51:50.729136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.729201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.729435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.729501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.729765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.729830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.730143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.730209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.730422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.730487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 
00:37:21.649 [2024-11-05 12:51:50.730754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.730827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.731111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.731176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.731416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.731480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.731721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.731785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.732059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.732126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 
00:37:21.649 [2024-11-05 12:51:50.732411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.649 [2024-11-05 12:51:50.732477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.649 qpair failed and we were unable to recover it. 00:37:21.649 [2024-11-05 12:51:50.732659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.732722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.732964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.733030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.733290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.733356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.733607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.733671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 
00:37:21.650 [2024-11-05 12:51:50.733922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.733989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.734188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.734253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.734509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.734573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.734830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.734913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.735179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.735245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 
00:37:21.650 [2024-11-05 12:51:50.735497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.735561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.735872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.735937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.736146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.736212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.736466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.736530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.736816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.736900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 
00:37:21.650 [2024-11-05 12:51:50.737120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.737185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.737434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.737498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.737800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.737884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.738177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.738241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.738481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.738545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 
00:37:21.650 [2024-11-05 12:51:50.738781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.738843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.739069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.739134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.739361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.739435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.739687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.739751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.740024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.740090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 
00:37:21.650 [2024-11-05 12:51:50.740307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.740371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.740611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.740674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.740966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.741034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.741331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.741396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.741683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.741747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 
00:37:21.650 [2024-11-05 12:51:50.742002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.742068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.650 [2024-11-05 12:51:50.742369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.650 [2024-11-05 12:51:50.742433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.650 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.742688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.742751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.742954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.743019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.743262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.743328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 
00:37:21.651 [2024-11-05 12:51:50.743615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.743680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.743980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.744046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.744291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.744356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.744645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.744710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.744967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.745034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 
00:37:21.651 [2024-11-05 12:51:50.745248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.745313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.745605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.745668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.745906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.745972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.746243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.746307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.746502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.746567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 
00:37:21.651 [2024-11-05 12:51:50.746830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.746910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.747196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.747262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.747562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.747625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.747875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.747941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.748205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.748278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 
00:37:21.651 [2024-11-05 12:51:50.748570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.748633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.748843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.748942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.749182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.749246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.749478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.749541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 00:37:21.651 [2024-11-05 12:51:50.749788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.651 [2024-11-05 12:51:50.749852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.651 qpair failed and we were unable to recover it. 
00:37:21.651 [2024-11-05 12:51:50.750088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.651 [2024-11-05 12:51:50.750152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.651 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111, ECONNREFUSED) against tqpair=0x12f8690 at 10.0.0.2:4420 repeat through 12:51:50.786 ...]
00:37:21.655 [2024-11-05 12:51:50.786583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.655 [2024-11-05 12:51:50.786647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.655 qpair failed and we were unable to recover it.
00:37:21.655 [2024-11-05 12:51:50.786889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.786954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.787202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.787266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.787565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.787630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.787841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.787939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.788160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.788226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 
00:37:21.655 [2024-11-05 12:51:50.788524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.788589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.788896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.788964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.789257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.789322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.789586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.789651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.789929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.789995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 
00:37:21.655 [2024-11-05 12:51:50.790244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.790307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.790589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.790653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.790874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.790943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.791173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.791238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.791526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.791590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 
00:37:21.655 [2024-11-05 12:51:50.791809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.791887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.792136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.792199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.655 [2024-11-05 12:51:50.792448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.655 [2024-11-05 12:51:50.792513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.655 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.792754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.792819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.793097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.793161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 
00:37:21.656 [2024-11-05 12:51:50.793456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.793521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.793737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.793801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.794023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.794089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.794347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.794411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.794618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.794682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 
00:37:21.656 [2024-11-05 12:51:50.794933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.795000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.795195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.795262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.795487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.795551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.795836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.795934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.796244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.796309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 
00:37:21.656 [2024-11-05 12:51:50.796523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.796587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.796839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.796922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.797136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.797201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.797415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.797480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.797736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.797799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 
00:37:21.656 [2024-11-05 12:51:50.798060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.798126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.798312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.798375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.798655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.798719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.798922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.798989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.799233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.799297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 
00:37:21.656 [2024-11-05 12:51:50.799548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.799612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.799874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.799942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.800234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.800308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.800561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.800626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.800839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.800920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 
00:37:21.656 [2024-11-05 12:51:50.801148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.801212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.801450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.801514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.801727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.801793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.656 qpair failed and we were unable to recover it. 00:37:21.656 [2024-11-05 12:51:50.802021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.656 [2024-11-05 12:51:50.802087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.802374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.802439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 
00:37:21.657 [2024-11-05 12:51:50.802748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.802811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.803124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.803189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.803456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.803522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.803764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.803826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.804091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.804157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 
00:37:21.657 [2024-11-05 12:51:50.804386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.804451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.804722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.804785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.805050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.805117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.805406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.805472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.805690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.805754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 
00:37:21.657 [2024-11-05 12:51:50.806065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.806131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.806370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.806437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.806687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.806752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.807015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.807082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.807339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.807402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 
00:37:21.657 [2024-11-05 12:51:50.807698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.807763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.808040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.808106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.808372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.808436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.808689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.808754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.809019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.809101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 
00:37:21.657 [2024-11-05 12:51:50.809354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.809419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.809704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.809768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.810047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.810113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.810356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.810420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 00:37:21.657 [2024-11-05 12:51:50.810707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.657 [2024-11-05 12:51:50.810771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.657 qpair failed and we were unable to recover it. 
00:37:21.657 [2024-11-05 12:51:50.811008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.657 [2024-11-05 12:51:50.811075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:21.657 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats from 12:51:50.811323 through 12:51:50.847793; further identical occurrences omitted ...]
00:37:21.661 [2024-11-05 12:51:50.848066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.661 [2024-11-05 12:51:50.848132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.661 qpair failed and we were unable to recover it. 00:37:21.661 [2024-11-05 12:51:50.848375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.661 [2024-11-05 12:51:50.848451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.661 qpair failed and we were unable to recover it. 00:37:21.661 [2024-11-05 12:51:50.848697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.661 [2024-11-05 12:51:50.848760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.661 qpair failed and we were unable to recover it. 00:37:21.661 [2024-11-05 12:51:50.849089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.661 [2024-11-05 12:51:50.849155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.661 qpair failed and we were unable to recover it. 00:37:21.661 [2024-11-05 12:51:50.849374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.661 [2024-11-05 12:51:50.849437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.661 qpair failed and we were unable to recover it. 
00:37:21.661 [2024-11-05 12:51:50.849714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.849778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.850000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.850067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.850322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.850385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.850622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.850685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.850924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.850992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 
00:37:21.662 [2024-11-05 12:51:50.851199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.851264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.851519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.851584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.851760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.851825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.852130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.852195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.852483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.852547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 
00:37:21.662 [2024-11-05 12:51:50.852805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.852886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.853150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.853213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.853468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.853531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.853778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.853842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.854087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.854151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 
00:37:21.662 [2024-11-05 12:51:50.854394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.854458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.854701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.854765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.855066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.855132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.855372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.855437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.855669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.855732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 
00:37:21.662 [2024-11-05 12:51:50.855954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.856021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.856312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.856376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.856607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.856689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.856979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.857045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.857307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.857372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 
00:37:21.662 [2024-11-05 12:51:50.857667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.857730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.857953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.858018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.858244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.662 [2024-11-05 12:51:50.858308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.662 qpair failed and we were unable to recover it. 00:37:21.662 [2024-11-05 12:51:50.858562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.663 [2024-11-05 12:51:50.858626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.663 qpair failed and we were unable to recover it. 00:37:21.663 [2024-11-05 12:51:50.858883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.663 [2024-11-05 12:51:50.858949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.663 qpair failed and we were unable to recover it. 
00:37:21.663 [2024-11-05 12:51:50.859202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.663 [2024-11-05 12:51:50.859266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.663 qpair failed and we were unable to recover it. 00:37:21.663 [2024-11-05 12:51:50.859516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.663 [2024-11-05 12:51:50.859578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.663 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.859832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.859927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.860161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.860226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.860485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.860549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 
00:37:21.941 [2024-11-05 12:51:50.860758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.860821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.861085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.861151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.861412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.861476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.861673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.861744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.862033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.862105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 
00:37:21.941 [2024-11-05 12:51:50.862374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.862438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.862628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.862692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.862982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.863048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.863301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.863365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.863656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.863720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 
00:37:21.941 [2024-11-05 12:51:50.863977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.864042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.864293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.864356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.864598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.864662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.864905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.864971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.865261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.865326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 
00:37:21.941 [2024-11-05 12:51:50.865615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.865679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.865957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.941 [2024-11-05 12:51:50.866023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.941 qpair failed and we were unable to recover it. 00:37:21.941 [2024-11-05 12:51:50.866288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.866352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.866595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.866659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.866918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.866983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 
00:37:21.942 [2024-11-05 12:51:50.867231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.867295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.867583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.867647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.867896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.867962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.868244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.868308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.868551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.868615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 
00:37:21.942 [2024-11-05 12:51:50.868912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.868978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.869235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.869300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.869537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.869601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.869852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.869935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.870231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.870306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 
00:37:21.942 [2024-11-05 12:51:50.870598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.870662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.870912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.870979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.871269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.871334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.871596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.871660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 00:37:21.942 [2024-11-05 12:51:50.871910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.871976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 
00:37:21.942 [2024-11-05 12:51:50.872232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.942 [2024-11-05 12:51:50.872297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.942 qpair failed and we were unable to recover it. 
[last message repeated for tqpair=0x12f8690 (addr=10.0.0.2, port=4420) from 12:51:50.872 through 12:51:50.906]
00:37:21.945 [2024-11-05 12:51:50.907037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.907071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.907302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.907377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.907615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.907680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.907966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.908001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.908117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.908151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 
00:37:21.945 [2024-11-05 12:51:50.908316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.908350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.908623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.908687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.908932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.908967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.909110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.909145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.909281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.909360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 
00:37:21.945 [2024-11-05 12:51:50.909655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.909719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.909940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.909974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.910119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.910152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.910370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.910434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.910723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.910787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 
00:37:21.945 [2024-11-05 12:51:50.911006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.911041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.911156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.911190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.911329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.945 [2024-11-05 12:51:50.911402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.945 qpair failed and we were unable to recover it. 00:37:21.945 [2024-11-05 12:51:50.911688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.911752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.911979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.912014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.912126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.912159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.912261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.912295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.912481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.912545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.912842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.912940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.913095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.913129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.913275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.913308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.913447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.913480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.913699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.913763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.913986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.914025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.914146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.914180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.914327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.914361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.914613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.914677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.914941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.914977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.915118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.915153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.915419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.915482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.915723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.915789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.916011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.916046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.916162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.916242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.916531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.916595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.916838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.916932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.917080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.917114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.917223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.917279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.917537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.917601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.917816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.917914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.918084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.918118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.918309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.918373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.918594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.918658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.918940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.918975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.919130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.919163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.919292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.919325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.919514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.919580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.919831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.919921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.920033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.920067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.920213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.920272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.946 [2024-11-05 12:51:50.920507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.920571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 
00:37:21.946 [2024-11-05 12:51:50.920908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.946 [2024-11-05 12:51:50.920943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.946 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.921062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.921096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.921265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.921342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.921581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.921645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.921890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.921958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 
00:37:21.947 [2024-11-05 12:51:50.922199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.922263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.922486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.922550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.922758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.922821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.923085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.923150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.923374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.923438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 
00:37:21.947 [2024-11-05 12:51:50.923726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.923790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.924065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.924132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.924379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.924413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.924580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.924613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.924802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.924886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 
00:37:21.947 [2024-11-05 12:51:50.925110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.925174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.925424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.925488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.925771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.925836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.926109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.926175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.926365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.926430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 
00:37:21.947 [2024-11-05 12:51:50.926693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.926757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.927018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.927086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.927272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.927336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.927564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.927627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 00:37:21.947 [2024-11-05 12:51:50.927848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.947 [2024-11-05 12:51:50.927933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.947 qpair failed and we were unable to recover it. 
00:37:21.950 [2024-11-05 12:51:50.957285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.957345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.957518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.957586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.957818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.957916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.958112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.958171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.958406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.958464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 
00:37:21.950 [2024-11-05 12:51:50.958708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.958767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.959038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.959117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.959383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.959466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.959740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.959798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.960087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.960166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 
00:37:21.950 [2024-11-05 12:51:50.960460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.960522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.960782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.960843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.961101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.961179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.961485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.961563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.961839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.961889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 
00:37:21.950 [2024-11-05 12:51:50.962009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.962043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.950 [2024-11-05 12:51:50.962235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.950 [2024-11-05 12:51:50.962312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.950 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.962549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.962626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.962877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.962950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.963212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.963289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.963584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.963662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.963902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.963963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.964153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.964245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.964474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.964555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.964753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.964812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.965026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.965103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.965363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.965440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.965676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.965734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.965959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.966048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.966304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.966337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.966522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.966582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.966797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.966856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.967119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.967207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.967413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.967490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.967774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.967834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.968123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.968200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.968469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.968530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.968766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.968827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.969152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.969229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.969430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.969509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.969728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.969788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.970196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.970276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.970530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.970607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.970831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.970876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.971003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.971037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.971251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.971334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.971606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.971682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.971920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.971982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.972249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.972283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.972396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.972429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.972612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.972671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.972917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.972978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.973212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.973289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.973484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.973562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 00:37:21.951 [2024-11-05 12:51:50.973779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.973840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.951 qpair failed and we were unable to recover it. 
00:37:21.951 [2024-11-05 12:51:50.974099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.951 [2024-11-05 12:51:50.974187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.974414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.974473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.974677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.974737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.975032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.975111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.975375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.975451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 
00:37:21.952 [2024-11-05 12:51:50.975705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.975739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.975882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.975943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.976195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.976271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.976531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.976565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.976704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.976739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 
00:37:21.952 [2024-11-05 12:51:50.976933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.977016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.977270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.977348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.977625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.977686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.977982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.978060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.978390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.978467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 
00:37:21.952 [2024-11-05 12:51:50.978731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.978764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.978908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.978960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.979196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.979229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.979366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.979400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 00:37:21.952 [2024-11-05 12:51:50.979595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.979656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 
00:37:21.952 [2024-11-05 12:51:50.979924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.952 [2024-11-05 12:51:50.979986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.952 qpair failed and we were unable to recover it. 
[... the same connect()/qpair-failure triple repeats continuously from 12:51:50.980216 through 12:51:51.012928 (roughly 115 occurrences): every connect() to 10.0.0.2 port 4420 fails with errno = 111 and no qpair (tqpair=0x12f8690) recovers ...]
00:37:21.955 [2024-11-05 12:51:51.013195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.013271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.013541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.013618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.013922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.013984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.014205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.014283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.014534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.014610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 
00:37:21.955 [2024-11-05 12:51:51.014889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.014950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.015214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.015248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.015394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.015427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.015658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.015719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.015967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.016046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 
00:37:21.955 [2024-11-05 12:51:51.016340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.016373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.016512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.016545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.016779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.016839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.017154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.017239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.955 [2024-11-05 12:51:51.017571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.017647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 
00:37:21.955 [2024-11-05 12:51:51.017893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.955 [2024-11-05 12:51:51.017955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.955 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.018247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.018323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.018636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.018713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.018966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.019046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.019295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.019372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 
00:37:21.956 [2024-11-05 12:51:51.019625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.019702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.019949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.020028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.020245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.020325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.020615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.020692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.020927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.021000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 
00:37:21.956 [2024-11-05 12:51:51.021271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.021304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.021441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.021474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.021627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.021660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.021928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.021999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.022194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.022255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 
00:37:21.956 [2024-11-05 12:51:51.022483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.022543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.022814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.022883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.023143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.023219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.023487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.023548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.023796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.023829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 
00:37:21.956 [2024-11-05 12:51:51.023987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.024054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.024300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.024334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.024502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.024570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.024849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.024927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.025180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.025262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 
00:37:21.956 [2024-11-05 12:51:51.025551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.025627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.025927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.025962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.026111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.026145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.026325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.026402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.026698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.026775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 
00:37:21.956 [2024-11-05 12:51:51.026997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.027031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.027144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.027177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.027345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.027421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.027590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.027652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.027879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.027940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 
00:37:21.956 [2024-11-05 12:51:51.028173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.028235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.028482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.028559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.028915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.028997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.029281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.029359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.956 qpair failed and we were unable to recover it. 00:37:21.956 [2024-11-05 12:51:51.029574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.956 [2024-11-05 12:51:51.029650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.029905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.029945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.030106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.030176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.030404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.030480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.030684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.030744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.031013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.031047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.031192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.031225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.031433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.031510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.031749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.031810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.032108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.032168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.032401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.032477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.032719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.032753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.032854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.032896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.033023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.033056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.033288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.033321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.033460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.033493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.033607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.033640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.033780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.033813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.034057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.034135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.034382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.034460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.034721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.034780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.035091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.035153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.035444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.035521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.035747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.035806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.036090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.036170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.036377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.036456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.036666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.036699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.036846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.036890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.037137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.037177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.037315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.037348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.037568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.037627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.037898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.037959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.038253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.038287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.038457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.038491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.038701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.038734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.038876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.038910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 
00:37:21.957 [2024-11-05 12:51:51.039132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.039166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.039335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.039368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.039546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.957 [2024-11-05 12:51:51.039624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.957 qpair failed and we were unable to recover it. 00:37:21.957 [2024-11-05 12:51:51.039894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.039956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.040219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.040252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.040394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.040427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.040685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.040763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.041081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.041159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.041398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.041449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.041723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.041784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.042115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.042193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.042455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.042516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.042756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.042816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.043077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.043154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.043379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.043456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.043654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.043714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.043918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.043980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.044271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.044348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.044607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.044684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.044889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.044949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.045227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.045288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.045536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.045613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.045833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.045906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.046148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.046226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.046434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.046514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.046745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.046804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.047053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.047132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.047325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.047402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.047636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.047694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.047897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.047960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.048196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.048256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.048504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.048582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.048830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.048903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.049276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.049380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.049679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.049749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.050044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.050109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.050421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.050489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.050827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.050939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.051253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.051320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.051583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.051650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 
00:37:21.958 [2024-11-05 12:51:51.051918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.051980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.052222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.958 [2024-11-05 12:51:51.052305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.958 qpair failed and we were unable to recover it. 00:37:21.958 [2024-11-05 12:51:51.052590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.052656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.052919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.052981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.053293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.053375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 
00:37:21.959 [2024-11-05 12:51:51.053661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.053728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.054000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.054077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.054313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.054379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.054640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.054707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.054982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.055020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 
00:37:21.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 816016 Killed "${NVMF_APP[@]}" "$@" 00:37:21.959 [2024-11-05 12:51:51.055190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.055225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.055346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.055381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.055505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.055540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:21.959 [2024-11-05 12:51:51.055653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.055689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 
00:37:21.959 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:21.959 [2024-11-05 12:51:51.055830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.055874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:21.959 [2024-11-05 12:51:51.055988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.056024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:21.959 [2024-11-05 12:51:51.056199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.056235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.056339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.056379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 
00:37:21.959 [2024-11-05 12:51:51.056530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.056566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.056705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.056740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.056855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.056900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.057054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.057089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.057232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.057268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 
00:37:21.959 [2024-11-05 12:51:51.057416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.057451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.057593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.057628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.057765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.057800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.057931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.057967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.058073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.058109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 
00:37:21.959 [2024-11-05 12:51:51.058257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.058292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.058433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.058467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.058637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.058672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.058814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.058849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 00:37:21.959 [2024-11-05 12:51:51.059014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.959 [2024-11-05 12:51:51.059050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.959 qpair failed and we were unable to recover it. 
00:37:21.959 [2024-11-05 12:51:51.059194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.059229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.059377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.059412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.059544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.059579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.059725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.059759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.059900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.059936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 
00:37:21.960 [2024-11-05 12:51:51.060076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.060112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.060283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.060318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.060437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.060471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.060586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.060622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=816573 00:37:21.960 [2024-11-05 12:51:51.060762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.060799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 
00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 816573 00:37:21.960 [2024-11-05 12:51:51.060932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.060969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.061116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 816573 ']' 00:37:21.960 [2024-11-05 12:51:51.061155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.960 [2024-11-05 12:51:51.061323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.061359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 
00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:21.960 [2024-11-05 12:51:51.061501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.061537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.960 [2024-11-05 12:51:51.061676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.061713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:21.960 [2024-11-05 12:51:51.061815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.061850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 
00:37:21.960 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:21.960 [2024-11-05 12:51:51.062018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.062053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.062227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.062274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.062449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.062494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.062663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.062718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.062968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.063005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 
00:37:21.960 [2024-11-05 12:51:51.063146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.063181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.063356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.063391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.063508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.063545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.063687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.063723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.063869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.063905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 
00:37:21.960 [2024-11-05 12:51:51.064028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.064064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.064222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.064257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.064394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.064428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.064574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.064610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.064729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.064764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 
00:37:21.960 [2024-11-05 12:51:51.064962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.064998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.065119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.065154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.960 [2024-11-05 12:51:51.065337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.960 [2024-11-05 12:51:51.065371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.960 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.065479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.065515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.065658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.065693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.065812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.065847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.065972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.066008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.066124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.066159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.066281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.066316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.066461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.066497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.066604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.066639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.066746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.066782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.066946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.066980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.067120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.067154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.067266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.067300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.067439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.067473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.067635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.067669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.067766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.067799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.067915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.067949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.068094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.068128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.068263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.068297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.068433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.068466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.068572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.068606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.068783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.068817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.068937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.068971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.069111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.069146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.069309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.069343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.069456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.069489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.069596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.069638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.069783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.069816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.069942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.069975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.070104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.070141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.070272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.070303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.070461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.070493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.070590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.070622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.070791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.070824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.070973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.071006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.071137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.071170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.071327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.071361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.961 [2024-11-05 12:51:51.071483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.071515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 
00:37:21.961 [2024-11-05 12:51:51.071618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.961 [2024-11-05 12:51:51.071651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.961 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.071811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.071843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.071964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.071997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.072106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.072146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.072279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.072311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 
00:37:21.962 [2024-11-05 12:51:51.072408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.072440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.072573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.072605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.072717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.072750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.072854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.072908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.073030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.073062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 
00:37:21.962 [2024-11-05 12:51:51.073181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.073213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.073353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.073385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.073479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.073511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.073673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.073705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.073879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 
00:37:21.962 [2024-11-05 12:51:51.074060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.074097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.074253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.074284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.074381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.074412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.074515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.074546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.074654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.074685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 
00:37:21.962 [2024-11-05 12:51:51.074807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.074838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.074971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.075003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.075112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.075142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.075234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.075265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 00:37:21.962 [2024-11-05 12:51:51.075363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.075394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 
00:37:21.962 [2024-11-05 12:51:51.075488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.962 [2024-11-05 12:51:51.075518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.962 qpair failed and we were unable to recover it. 
[the same connect() failed / sock connection error / qpair failed sequence repeats unchanged for tqpair=0x7f47a8000b90 (addr=10.0.0.2, port=4420) from 12:51:51.075645 through 12:51:51.091686]
00:37:21.965 [2024-11-05 12:51:51.091799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.091825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 00:37:21.965 [2024-11-05 12:51:51.091927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.091953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 00:37:21.965 [2024-11-05 12:51:51.092041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.092066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 00:37:21.965 [2024-11-05 12:51:51.092179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.092205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 00:37:21.965 [2024-11-05 12:51:51.092320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.092345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 
00:37:21.965 [2024-11-05 12:51:51.092428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.092454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 00:37:21.965 [2024-11-05 12:51:51.092546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.092573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 00:37:21.965 [2024-11-05 12:51:51.092687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.092713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.965 qpair failed and we were unable to recover it. 00:37:21.965 [2024-11-05 12:51:51.092795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.965 [2024-11-05 12:51:51.092820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.092923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.092950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 
00:37:21.966 [2024-11-05 12:51:51.093038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.093159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.093296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.093404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.093520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 
00:37:21.966 [2024-11-05 12:51:51.093667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.093777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.093943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.093969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.094059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.094086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.094222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.094247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 
00:37:21.966 [2024-11-05 12:51:51.094365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.094391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.094497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.094522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.094611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.094638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.094757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.094783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.094869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.094895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 
00:37:21.966 [2024-11-05 12:51:51.095010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.095125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.095231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.095347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.095495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 
00:37:21.966 [2024-11-05 12:51:51.095607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.095748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.095857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.095962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.095988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.096105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.096131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 
00:37:21.966 [2024-11-05 12:51:51.096217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.096247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.096389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.096415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.096504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.096530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.096609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.096634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.096773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.096799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 
00:37:21.966 [2024-11-05 12:51:51.096952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.096980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.966 [2024-11-05 12:51:51.097087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.966 [2024-11-05 12:51:51.097113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.966 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.097233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.097259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.097353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.097379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.097468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.097495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.097582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.097607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.097724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.097751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.097871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.097898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.098010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.098185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.098300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.098409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.098525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.098658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.098772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.098911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.098937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.099056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.099083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.099193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.099219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.099303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.099330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.099443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.099468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.099581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.099609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.099725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.099751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.099839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.099871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.099989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.100104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.100215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.100326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.100432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.100571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.100691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.100802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.100951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.100977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.101061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.101171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.101308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.101428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.101537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.101647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.101815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 00:37:21.967 [2024-11-05 12:51:51.101949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.967 [2024-11-05 12:51:51.101976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.967 qpair failed and we were unable to recover it. 
00:37:21.967 [2024-11-05 12:51:51.102115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.102142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.102263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.102288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.102425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.102451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.102565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.102590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.102677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.102704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 
00:37:21.968 [2024-11-05 12:51:51.102820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.102846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.102939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.102965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.103077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.103103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.103212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.103238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.103354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.103379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 
00:37:21.968 [2024-11-05 12:51:51.103462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.103489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.103601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.103627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.103734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.103761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.103884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.103911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.104032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 
00:37:21.968 [2024-11-05 12:51:51.104152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.104254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.104370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.104516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.104626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 
00:37:21.968 [2024-11-05 12:51:51.104733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.104881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.104907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.105051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.105078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.105220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.105246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.105333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.105360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 
00:37:21.968 [2024-11-05 12:51:51.105477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.105505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.105620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.968 [2024-11-05 12:51:51.105646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.968 qpair failed and we were unable to recover it. 00:37:21.968 [2024-11-05 12:51:51.105739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.105765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.105877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.105904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.105988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 
00:37:21.969 [2024-11-05 12:51:51.106109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.106216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.106330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.106436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.106548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 
00:37:21.969 [2024-11-05 12:51:51.106659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.106783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.106969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.106994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.107086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.107203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 
00:37:21.969 [2024-11-05 12:51:51.107314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.107423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.107560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.107680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.107793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 
00:37:21.969 [2024-11-05 12:51:51.107928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.107955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.108072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.108099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.108241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.108266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.108378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.108406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.108551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.108576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 
00:37:21.969 [2024-11-05 12:51:51.108662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.108689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.108782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.108808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.108955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.969 [2024-11-05 12:51:51.108983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.969 qpair failed and we were unable to recover it. 00:37:21.969 [2024-11-05 12:51:51.109067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.109215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 
00:37:21.970 [2024-11-05 12:51:51.109358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.109466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.109580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.109701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.109845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 
00:37:21.970 [2024-11-05 12:51:51.109965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.109990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.110078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.110105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.110252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.110278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.110398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.110424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.110539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.110565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 
00:37:21.970 [2024-11-05 12:51:51.110679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.110704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.110792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.110818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.110941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.110968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.111078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.111104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.111211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.111237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 
00:37:21.970 [2024-11-05 12:51:51.111353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.111380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.111497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.111524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.111635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.111662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.111800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.111826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.111923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.111950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 
00:37:21.970 [2024-11-05 12:51:51.112041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.112071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.112188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.112215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.112326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.112352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.112462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.112489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.112602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.112629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 
00:37:21.970 [2024-11-05 12:51:51.112710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.970 [2024-11-05 12:51:51.112737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.970 qpair failed and we were unable to recover it. 00:37:21.970 [2024-11-05 12:51:51.112874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.112901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.113019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.113046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.113129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.113156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.113257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.113284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 
00:37:21.971 [2024-11-05 12:51:51.113397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.113423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.113502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.113528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.113646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.113673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.113813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.113840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.113989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 
00:37:21.971 [2024-11-05 12:51:51.114157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.114295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.114400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.114549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.114693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 
00:37:21.971 [2024-11-05 12:51:51.114807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.114920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.114947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.115057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.115083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.115164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.115192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 00:37:21.971 [2024-11-05 12:51:51.115333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.971 [2024-11-05 12:51:51.115360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.971 qpair failed and we were unable to recover it. 
00:37:21.971 [2024-11-05 12:51:51.115444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.115471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.115583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.115610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.115706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.115733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.115853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.115897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.116010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.116037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.116118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.116144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.116238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.116264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.116376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.971 [2024-11-05 12:51:51.116402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.971 [2024-11-05 12:51:51.116380] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization...
00:37:21.971 qpair failed and we were unable to recover it.
00:37:21.971 [2024-11-05 12:51:51.116465] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:21.971 [2024-11-05 12:51:51.116489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.116514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.116620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.116645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.116720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.116745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.116912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.116938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.117075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.117213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.117330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.117476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.117625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.117759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.117876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.117996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.118162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.118279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.118387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.118526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.118663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.118780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.118950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.118978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.119096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.119214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.119358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.119475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.119583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.119720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.972 [2024-11-05 12:51:51.119887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.972 qpair failed and we were unable to recover it.
00:37:21.972 [2024-11-05 12:51:51.119972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.119999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.120943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.120969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.121958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.121984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.122070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.122098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.122179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.122206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.122318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.973 [2024-11-05 12:51:51.122345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.973 qpair failed and we were unable to recover it.
00:37:21.973 [2024-11-05 12:51:51.122457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.122488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.122580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.122607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.122720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.122746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.122842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.122875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.122963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.122991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.123109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.123253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.123391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.123509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.123644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.123763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.123900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.123992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.124133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.124276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.124443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.124586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.124700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.124814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.124941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.124969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.125086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.125112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.125217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.125244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.125360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.125385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.125498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.125524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.125613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.125640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.125732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.125759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.974 [2024-11-05 12:51:51.125842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.974 [2024-11-05 12:51:51.125876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.974 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.125970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.125999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.126094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.126120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.126233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.126260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.126379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.126405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.126500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.126526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.126667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.126693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.126813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.126840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.126936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.126961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.127047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.127072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.127158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.127184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.127265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.127293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.127375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.127401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.127515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:21.975 [2024-11-05 12:51:51.127542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:21.975 qpair failed and we were unable to recover it.
00:37:21.975 [2024-11-05 12:51:51.127654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.127684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.127817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.127845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.127969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.127995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.128104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.128129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.128218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.128244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 
00:37:21.975 [2024-11-05 12:51:51.128387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.128413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.128526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.128553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.128695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.128720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.128834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.128867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 00:37:21.975 [2024-11-05 12:51:51.129012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.975 [2024-11-05 12:51:51.129038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.975 qpair failed and we were unable to recover it. 
00:37:21.976 [2024-11-05 12:51:51.129124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.129149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.129238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.129263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.129378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.129404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.129494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.129520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.129606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.129632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 
00:37:21.976 [2024-11-05 12:51:51.129751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.129776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.129898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.129925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.130011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.130120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.130239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 
00:37:21.976 [2024-11-05 12:51:51.130383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.130495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.130661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.130773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.130885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.130913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 
00:37:21.976 [2024-11-05 12:51:51.131019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.131149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.131273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.131381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.131489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 
00:37:21.976 [2024-11-05 12:51:51.131631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.131771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.131891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.131918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.132057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.132084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.132196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.132222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 
00:37:21.976 [2024-11-05 12:51:51.132338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.132363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.132470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.976 [2024-11-05 12:51:51.132495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.976 qpair failed and we were unable to recover it. 00:37:21.976 [2024-11-05 12:51:51.132580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.132606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.132724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.132750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.132838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.132871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 
00:37:21.977 [2024-11-05 12:51:51.132989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.133019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.133110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.133136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.133247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.133272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.133389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.133414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.133556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.133583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 
00:37:21.977 [2024-11-05 12:51:51.133723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.133748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.133891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.133918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.134034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.134149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.134262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 
00:37:21.977 [2024-11-05 12:51:51.134402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.134509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.134644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.134780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.134906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.134933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 
00:37:21.977 [2024-11-05 12:51:51.135018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.135128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.135262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.135403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.135507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 
00:37:21.977 [2024-11-05 12:51:51.135647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.135770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.135884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.135911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.136050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.977 [2024-11-05 12:51:51.136076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.977 qpair failed and we were unable to recover it. 00:37:21.977 [2024-11-05 12:51:51.136189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.136215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 
00:37:21.978 [2024-11-05 12:51:51.136356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.136383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.136523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.136549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.136664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.136692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.136804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.136831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.136942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.136970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 
00:37:21.978 [2024-11-05 12:51:51.137114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.137218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.137358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.137467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.137581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 
00:37:21.978 [2024-11-05 12:51:51.137687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.137829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.137960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.137987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.138078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.138104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.138190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.138218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 
00:37:21.978 [2024-11-05 12:51:51.138329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.138359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.138497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.138524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.138615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.138641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.138782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.138808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.138918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.138945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 
00:37:21.978 [2024-11-05 12:51:51.139085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.139111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.139198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.139223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.139315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.139340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.139423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.139450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 00:37:21.978 [2024-11-05 12:51:51.139564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.978 [2024-11-05 12:51:51.139590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.978 qpair failed and we were unable to recover it. 
00:37:21.983 [2024-11-05 12:51:51.154551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.154581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.154694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.154722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.154838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.154871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.154992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.155126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 
00:37:21.983 [2024-11-05 12:51:51.155240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.155348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.155460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.155628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.155769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 
00:37:21.983 [2024-11-05 12:51:51.155921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.155948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.156060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.156087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.156166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.156192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.156307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.156334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.156452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.156479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 
00:37:21.983 [2024-11-05 12:51:51.156596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.156622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.156771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.156797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.156891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.156918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.157007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.157033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.983 [2024-11-05 12:51:51.157113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.157139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 
00:37:21.983 [2024-11-05 12:51:51.157256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.983 [2024-11-05 12:51:51.157283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.983 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.157363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.157389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.157519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.157561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.157656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.157684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.157820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.157847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 
00:37:21.984 [2024-11-05 12:51:51.157944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.157971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.158062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.158088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.158197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.158237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.158362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.158390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.158514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.158540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 
00:37:21.984 [2024-11-05 12:51:51.158630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.158657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.158734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.158760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.158872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.158898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.159011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.159125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 
00:37:21.984 [2024-11-05 12:51:51.159234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.159347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.159493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.159636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.159783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 
00:37:21.984 [2024-11-05 12:51:51.159901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.159928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 
00:37:21.984 [2024-11-05 12:51:51.160484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.984 [2024-11-05 12:51:51.160907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.984 qpair failed and we were unable to recover it. 00:37:21.984 [2024-11-05 12:51:51.160992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.985 [2024-11-05 12:51:51.161018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.985 qpair failed and we were unable to recover it. 
00:37:21.985 [2024-11-05 12:51:51.161123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.985 [2024-11-05 12:51:51.161149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.985 qpair failed and we were unable to recover it. 00:37:21.985 [2024-11-05 12:51:51.161260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.985 [2024-11-05 12:51:51.161285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.985 qpair failed and we were unable to recover it. 00:37:21.985 [2024-11-05 12:51:51.161398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.985 [2024-11-05 12:51:51.161424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.985 qpair failed and we were unable to recover it. 00:37:21.985 [2024-11-05 12:51:51.161539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.985 [2024-11-05 12:51:51.161564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:21.985 qpair failed and we were unable to recover it. 00:37:21.985 [2024-11-05 12:51:51.161661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:21.985 [2024-11-05 12:51:51.161690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:21.985 qpair failed and we were unable to recover it. 
00:37:22.267 [2024-11-05 12:51:51.161831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.267 [2024-11-05 12:51:51.161874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.267 qpair failed and we were unable to recover it. 00:37:22.267 [2024-11-05 12:51:51.161975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.267 [2024-11-05 12:51:51.162002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.267 qpair failed and we were unable to recover it. 00:37:22.267 [2024-11-05 12:51:51.162087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.267 [2024-11-05 12:51:51.162119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.267 qpair failed and we were unable to recover it. 00:37:22.267 [2024-11-05 12:51:51.162238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.267 [2024-11-05 12:51:51.162263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.267 qpair failed and we were unable to recover it. 00:37:22.267 [2024-11-05 12:51:51.162355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.267 [2024-11-05 12:51:51.162381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.267 qpair failed and we were unable to recover it. 
00:37:22.267 [2024-11-05 12:51:51.162468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.267 [2024-11-05 12:51:51.162494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.267 qpair failed and we were unable to recover it. 00:37:22.267 [2024-11-05 12:51:51.162665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.267 [2024-11-05 12:51:51.162692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.267 qpair failed and we were unable to recover it. 00:37:22.267 [2024-11-05 12:51:51.162783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.162810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.162906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.162934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.163020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 
00:37:22.268 [2024-11-05 12:51:51.163137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.163253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.163385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.163506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.163652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 
00:37:22.268 [2024-11-05 12:51:51.163793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.163945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.163972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.164065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.164091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.164176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.164203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 00:37:22.268 [2024-11-05 12:51:51.164299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.268 [2024-11-05 12:51:51.164325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.268 qpair failed and we were unable to recover it. 
00:37:22.268 [2024-11-05 12:51:51.164412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.164439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.164524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.164550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.164671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.164698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.164779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.164805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.164933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.164960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.165099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.165125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.165221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.165247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.165364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.165391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.165513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.165539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.165665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.165690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.165807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.165833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.165939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.165966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.166059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.166085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.166178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.166204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.166343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.166369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.166492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.166519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.166640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.166666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.166803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.166830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.166935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.166961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.167052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.167089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.167213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.167239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.167352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.167378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.167489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.167516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.167609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.167635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.167726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.268 [2024-11-05 12:51:51.167752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.268 qpair failed and we were unable to recover it.
00:37:22.268 [2024-11-05 12:51:51.167841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.167872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.167991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.168897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.168980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.169126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.169258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.169362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.169500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.169645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.169763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.169900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.169926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.170040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.170184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.170298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.170465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.170585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.170757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.170897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.170983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.171883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.171999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.172025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.172107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.172132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.172226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.172256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.172346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.172371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.172487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.172513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.172602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.172628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.269 [2024-11-05 12:51:51.172739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.269 [2024-11-05 12:51:51.172765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.269 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.172850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.172886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.173935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.173962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.174057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.174083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.174198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.174224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.174344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.174371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.174489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.174516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.174632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.174659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.174771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.174798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.174929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.174956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.175075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.175101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.175245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.175272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.175382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.175408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.175492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.175518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.175610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.175636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.175752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.175778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.175897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.175923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.176968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.176994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.177138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.177164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.177278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.177304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.177416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.177442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.177528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.177555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.177685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.177716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.177809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.177834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.270 [2024-11-05 12:51:51.177978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.270 [2024-11-05 12:51:51.178004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.270 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.178095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.178121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.178210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.178237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.178350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.178376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.178495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.178521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.178598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.178624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.178743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.178770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.178890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.178917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.179010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.179036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.179124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.179150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.179246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.179273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.179391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.179417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.179536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.179562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.179650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.271 [2024-11-05 12:51:51.179676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.271 qpair failed and we were unable to recover it.
00:37:22.271 [2024-11-05 12:51:51.179770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.179798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.179940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.179966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.180080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.180105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.180191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.180216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.180361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.180387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 
00:37:22.271 [2024-11-05 12:51:51.180473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.180499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.180588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.180615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.180734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.180759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.180906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.180932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.181023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 
00:37:22.271 [2024-11-05 12:51:51.181191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.181328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.181451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.181557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.181693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 
00:37:22.271 [2024-11-05 12:51:51.181837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.181970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.181996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.182105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.182131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.182268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.182294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.182376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.182402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 
00:37:22.271 [2024-11-05 12:51:51.182520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.182547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.182643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.182668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.182781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.182806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.182901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.182927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 00:37:22.271 [2024-11-05 12:51:51.183019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.271 [2024-11-05 12:51:51.183053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.271 qpair failed and we were unable to recover it. 
00:37:22.272 [2024-11-05 12:51:51.183130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.183156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.183270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.183296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.183436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.183462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.183550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.183576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.183715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.183740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 
00:37:22.272 [2024-11-05 12:51:51.183823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.183849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.183969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.183994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.184078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.184211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.184319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 
00:37:22.272 [2024-11-05 12:51:51.184425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.184568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.184691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.184813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.184967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.184993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 
00:37:22.272 [2024-11-05 12:51:51.185083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.185108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.185228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.185253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.185339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.185364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.185475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.185502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.185595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.185621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 
00:37:22.272 [2024-11-05 12:51:51.185718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.185744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.185884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.185917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.186032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.186058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.186175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.186202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.186320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.186345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 
00:37:22.272 [2024-11-05 12:51:51.186433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.186459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.186579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.186605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.186723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.186750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.186855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.186887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.186979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.187005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 
00:37:22.272 [2024-11-05 12:51:51.187087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.187112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.187222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.187248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.272 [2024-11-05 12:51:51.187333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.272 [2024-11-05 12:51:51.187360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.272 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.187447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.187474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.187607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.187633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 
00:37:22.273 [2024-11-05 12:51:51.187751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.187779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.187900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.187927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.188044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.188070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.188184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.188210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.188302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.188333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 
00:37:22.273 [2024-11-05 12:51:51.188447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.188474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.188587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.188613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.188702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.188728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.188819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.188844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.188990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.189016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 
00:37:22.273 [2024-11-05 12:51:51.189092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.189117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.189230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.189257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.189399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.189424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.189565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.189590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.189705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.189731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 
00:37:22.273 [2024-11-05 12:51:51.189818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.189844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.189993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.190105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.190248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.190414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 
00:37:22.273 [2024-11-05 12:51:51.190559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.190695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.190810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.190941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.190966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 00:37:22.273 [2024-11-05 12:51:51.191083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.273 [2024-11-05 12:51:51.191108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.273 qpair failed and we were unable to recover it. 
00:37:22.273 [2024-11-05 12:51:51.191191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.191217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.191312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.191337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.191453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.191477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.191596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.191622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.191712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.191737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.191877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.191903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.192018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.192043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.192162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.192188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.192276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.192301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.192418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.192443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.192558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.273 [2024-11-05 12:51:51.192583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.273 qpair failed and we were unable to recover it.
00:37:22.273 [2024-11-05 12:51:51.192674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.192698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.192813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.192838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.192890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1306630 (9): Bad file descriptor
00:37:22.274 [2024-11-05 12:51:51.193067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.193105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.193233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.193262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.193343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.193370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.193485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.193511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.193654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.193680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.193770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.193796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.193917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.193946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.194965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.194991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.195133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.195242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.195375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.195484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.195609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.195733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.195878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.195976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.196884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.196978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.197004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.197096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.197133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.197273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.197300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.197398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.197426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.197548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.197573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.197664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.197691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.274 [2024-11-05 12:51:51.197772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.274 [2024-11-05 12:51:51.197798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.274 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.197886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.197913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.198855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.198912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.199936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.199961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.200052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.200078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.200207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.200232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.200368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.200393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.200479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.200504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.200619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.200644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.200728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.200753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.200883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.200917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.201906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.201993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.202018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.202110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.202136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.202281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.202306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.202417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.202443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.202533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.202563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.202669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.275 [2024-11-05 12:51:51.202695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.275 qpair failed and we were unable to recover it.
00:37:22.275 [2024-11-05 12:51:51.202849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.276 [2024-11-05 12:51:51.202896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.276 qpair failed and we were unable to recover it.
00:37:22.276 [2024-11-05 12:51:51.203005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.276 [2024-11-05 12:51:51.203034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.276 qpair failed and we were unable to recover it.
00:37:22.276 [2024-11-05 12:51:51.203157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.276 [2024-11-05 12:51:51.203185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.276 qpair failed and we were unable to recover it.
00:37:22.276 [2024-11-05 12:51:51.203259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.276 [2024-11-05 12:51:51.203285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.276 qpair failed and we were unable to recover it.
00:37:22.276 [2024-11-05 12:51:51.203397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.276 [2024-11-05 12:51:51.203425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.276 qpair failed and we were unable to recover it.
00:37:22.276 [2024-11-05 12:51:51.203515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.276 [2024-11-05 12:51:51.203543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.276 qpair failed and we were unable to recover it.
00:37:22.276 [2024-11-05 12:51:51.203659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.276 [2024-11-05 12:51:51.203685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.276 qpair failed and we were unable to recover it.
00:37:22.276 [2024-11-05 12:51:51.203761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.203788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.203888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.203915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.203999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.204026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.204117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.204151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.204262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.204289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 
00:37:22.276 [2024-11-05 12:51:51.204418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.204445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.204588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.204613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.204701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.204726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.204872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.204898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.205040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 
00:37:22.276 [2024-11-05 12:51:51.205189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.205297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.205467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.205580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.205686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 
00:37:22.276 [2024-11-05 12:51:51.205828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.205945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.205971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.206082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.206107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.206210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.206240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.206333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:22.276 [2024-11-05 12:51:51.206358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.206383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 
00:37:22.276 [2024-11-05 12:51:51.206530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.206557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.206656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.206682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.206797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.206822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.206926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.206953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.207048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.207073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 
00:37:22.276 [2024-11-05 12:51:51.207180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.207205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.207295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.276 [2024-11-05 12:51:51.207321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.276 qpair failed and we were unable to recover it. 00:37:22.276 [2024-11-05 12:51:51.207419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.207444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.207558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.207582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.207684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.207724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.207830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.207866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.207976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.208003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.208149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.208177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.208315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.208342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.208458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.208485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.208573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.208601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.208748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.208776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.208883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.208911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.209007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.209034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.209151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.209180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.209295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.209323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.209441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.209468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.209583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.209609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.209700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.209725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.209869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.209894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.209992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.210108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.210244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.210355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.210496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.210636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.210801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.210969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.210996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.211155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.211181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.211317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.211345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.211524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.211552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.211642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.211671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.211811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.211838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.211940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.211973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.212063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.212090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.212185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.212211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.212306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.212333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.212445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.212472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.212621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.212648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.277 [2024-11-05 12:51:51.212730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.212757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 
00:37:22.277 [2024-11-05 12:51:51.212895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.277 [2024-11-05 12:51:51.212923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.277 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.213038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.213065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.213169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.213196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.213342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.213369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.213463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.213491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 
00:37:22.278 [2024-11-05 12:51:51.213567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.213594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.213703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.213730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.213828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.213855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.213976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.214148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 
00:37:22.278 [2024-11-05 12:51:51.214266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.214410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.214554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.214669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.214810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 
00:37:22.278 [2024-11-05 12:51:51.214948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.214974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.215086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.215122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.215213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.215240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.215387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.215413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.215511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.215538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 
00:37:22.278 [2024-11-05 12:51:51.215676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.215707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.215824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.215849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.215993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.216142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.216254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 
00:37:22.278 [2024-11-05 12:51:51.216376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.216491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.216631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.216750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 00:37:22.278 [2024-11-05 12:51:51.216881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.278 [2024-11-05 12:51:51.216922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.278 qpair failed and we were unable to recover it. 
00:37:22.278 [2024-11-05 12:51:51.217072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.217102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.217188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.217215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.217369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.217396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.217496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.217523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.217650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.217683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.217793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.217820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.217934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.217964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.218053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.218081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.218202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.218228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.278 qpair failed and we were unable to recover it.
00:37:22.278 [2024-11-05 12:51:51.218324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.278 [2024-11-05 12:51:51.218351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.218439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.218466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.218554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.218580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.218671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.218696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.218777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.218803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.218911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.218937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.219077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.219103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.219236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.219261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.219350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.219381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.219529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.219554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.219664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.219689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.219802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.219828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.219938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.219964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.220044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.220069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.220166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.220192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.220286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.220312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.220385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.220410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.220548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.220577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.220701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.220727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.220882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.220913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.221029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.221056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.221188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.221214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.221311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.221339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.221425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.221451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.221562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.221590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.221732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.221759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.221863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.221891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.222885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.222979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.223008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.223134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.223160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.223272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.223299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.223384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.279 [2024-11-05 12:51:51.223410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.279 qpair failed and we were unable to recover it.
00:37:22.279 [2024-11-05 12:51:51.223528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.223556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.223667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.223692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.223778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.223804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.223937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.223963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.224969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.224994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.225954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.225997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.226121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.226150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.226264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.226291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.226375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.226401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.226509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.226540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.226660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.226688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.226770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.226796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.226898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.226930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.227878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.227993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.228018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.228108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.228136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.228234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.228259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.280 [2024-11-05 12:51:51.228342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.280 [2024-11-05 12:51:51.228371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.280 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.228492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.228519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.228633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.228660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.228744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.228770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.228889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.228927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.229047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.229073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.229160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.229187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.229282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.229308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.229393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.281 [2024-11-05 12:51:51.229419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.281 qpair failed and we were unable to recover it.
00:37:22.281 [2024-11-05 12:51:51.229535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.229560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.229651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.229679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.229796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.229822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.229955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.229983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.230075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 
00:37:22.281 [2024-11-05 12:51:51.230181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.230322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.230466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.230579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.230685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 
00:37:22.281 [2024-11-05 12:51:51.230826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.230956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.230982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.231095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.231119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.231207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.231232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.231321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.231346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 
00:37:22.281 [2024-11-05 12:51:51.231429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.231454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.231570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.231600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.231684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.231710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.231823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.231848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.231978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 
00:37:22.281 [2024-11-05 12:51:51.232077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.232195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.232298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.232433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.232541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 
00:37:22.281 [2024-11-05 12:51:51.232659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.281 [2024-11-05 12:51:51.232789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.281 [2024-11-05 12:51:51.232828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.281 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.232939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.232969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.233088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.233115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.233233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.233261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.233423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.233450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.233541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.233567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.233657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.233685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.233777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.233802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.233936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.233963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.234044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.234070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.234165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.234191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.234308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.234334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.234472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.234498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.234610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.234635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.234721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.234748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.234841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.234883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.234997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.235110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.235241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.235355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.235491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.235597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.235777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.235955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.235983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.236121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.236161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.236270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.236297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.236447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.236473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.236566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.236592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.236679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.236705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.236794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.236821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.236933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.236959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.237050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.237081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.237211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.237238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.237339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.237366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.237478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.237504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.237619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.237646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.237728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.237754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.237869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.237908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 00:37:22.282 [2024-11-05 12:51:51.237995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.238021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.282 qpair failed and we were unable to recover it. 
00:37:22.282 [2024-11-05 12:51:51.238106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.282 [2024-11-05 12:51:51.238133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.238284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.238310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.238427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.238454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.238574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.238600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.238696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.238724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 
00:37:22.283 [2024-11-05 12:51:51.238840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.238891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.238991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.239130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.239269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.239374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 
00:37:22.283 [2024-11-05 12:51:51.239483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.239607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.239720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.239870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.239897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.240013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 
00:37:22.283 [2024-11-05 12:51:51.240140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.240255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.240365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.240477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.240633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 
00:37:22.283 [2024-11-05 12:51:51.240749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.240867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.240894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.240985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.241091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.241239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 
00:37:22.283 [2024-11-05 12:51:51.241342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.241506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.241621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.241780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.241920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.241947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 
00:37:22.283 [2024-11-05 12:51:51.242064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.242209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.242331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.242447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.242591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 
00:37:22.283 [2024-11-05 12:51:51.242696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.242834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.242956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.283 [2024-11-05 12:51:51.242982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.283 qpair failed and we were unable to recover it. 00:37:22.283 [2024-11-05 12:51:51.243106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.243131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.243209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.243235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.243326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.243351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.243458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.243484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.243576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.243605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.243706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.243745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.243874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.243907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.244001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.244029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.244131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.244160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.244277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.244303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.244418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.244446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.244555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.244581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.244700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.244727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.244838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.244873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.245008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.245175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.245308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.245413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.245524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.245632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.245770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.245885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.245926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.246018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.246140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.246284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.246424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.246533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.246659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.246803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.246938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.246965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.247049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.247185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.247327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.247440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.247552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.247660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.247765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 
00:37:22.284 [2024-11-05 12:51:51.247911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.247937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.284 [2024-11-05 12:51:51.248044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.284 [2024-11-05 12:51:51.248069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.284 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.248212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.248237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.248321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.248347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.248441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.248470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 
00:37:22.285 [2024-11-05 12:51:51.248594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.248621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.248705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.248731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.248842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.248873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.248999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.249026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.249170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.249197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 
00:37:22.285 [2024-11-05 12:51:51.249339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.249365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.249504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.249542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.249674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.249706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.249834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.249874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.249986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.250015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 
00:37:22.285 [2024-11-05 12:51:51.250142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.250172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.250265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.250293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.250432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.250462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.250612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.250643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.250748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.250777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 
00:37:22.285 [2024-11-05 12:51:51.250912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.250942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.251037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.251065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.251167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.251195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.253874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.253917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.254019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.254053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 
00:37:22.285 [2024-11-05 12:51:51.254182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.254212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.254352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.254382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.254532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.254562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.254666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.254695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.254818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.254848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 
00:37:22.285 [2024-11-05 12:51:51.254964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.255004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.255157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.255195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.255294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.255321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.255407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.255433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 00:37:22.285 [2024-11-05 12:51:51.255517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.255543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.285 qpair failed and we were unable to recover it. 
00:37:22.285 [2024-11-05 12:51:51.255623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.285 [2024-11-05 12:51:51.255649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.255758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.255785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.255904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.255931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.256028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.256172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 
00:37:22.286 [2024-11-05 12:51:51.256282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.256392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.256528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.256646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.256769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 
00:37:22.286 [2024-11-05 12:51:51.256917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.256946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.257065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.257093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.257214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.257242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.257329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.257357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.257479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.257505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 
00:37:22.286 [2024-11-05 12:51:51.257603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.257631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.257776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.257807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.257905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.257931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.258022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.258152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 
00:37:22.286 [2024-11-05 12:51:51.258267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.258404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.258508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.258620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.258738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 
00:37:22.286 [2024-11-05 12:51:51.258879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.258906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.259024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.259141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.259277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.259391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 
00:37:22.286 [2024-11-05 12:51:51.259516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.259634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.259752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.259880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.259906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 00:37:22.286 [2024-11-05 12:51:51.260047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.260073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.286 qpair failed and we were unable to recover it. 
00:37:22.286 [2024-11-05 12:51:51.260196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.286 [2024-11-05 12:51:51.260222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.260304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.260329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.260444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.260469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.260557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.260582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.260699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.260727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 
00:37:22.287 [2024-11-05 12:51:51.260816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.260844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.260939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.260966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.261081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.261109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.261191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.261223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.261302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.261329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 
00:37:22.287 [2024-11-05 12:51:51.261418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.261446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.261537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.261576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.261696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.261724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.261843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.261891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.261985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 
00:37:22.287 [2024-11-05 12:51:51.262098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.262199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.262307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.262422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.262536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 
00:37:22.287 [2024-11-05 12:51:51.262645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.262770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.262871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:22.287 [2024-11-05 12:51:51.262894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.262910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:22.287 [2024-11-05 12:51:51.262922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 [2024-11-05 12:51:51.262925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:22.287 [2024-11-05 12:51:51.262939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.262950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:22.287 [2024-11-05 12:51:51.263019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.263128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.263228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.263365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.263468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 
00:37:22.287 [2024-11-05 12:51:51.263635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.263779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.263903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.263931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.264049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.264077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.264162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.264189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 
00:37:22.287 [2024-11-05 12:51:51.264308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.264335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.264475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.264503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.264602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.287 [2024-11-05 12:51:51.264631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.287 qpair failed and we were unable to recover it. 00:37:22.287 [2024-11-05 12:51:51.264613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:22.287 [2024-11-05 12:51:51.264637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:22.288 [2024-11-05 12:51:51.264747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.264773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.264852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.264884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 
00:37:22.288 [2024-11-05 12:51:51.264907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:22.288 [2024-11-05 12:51:51.264912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:22.288 [2024-11-05 12:51:51.264979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.265090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.265229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.265348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.265484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 
00:37:22.288 [2024-11-05 12:51:51.265625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.265736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.265848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.265887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.265974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.266088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 
00:37:22.288 [2024-11-05 12:51:51.266200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.266329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.266438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.266579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.266698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 
00:37:22.288 [2024-11-05 12:51:51.266809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.266921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.266947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.267040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.267156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.267264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 
00:37:22.288 [2024-11-05 12:51:51.267368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.267487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.267591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.267696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 00:37:22.288 [2024-11-05 12:51:51.267813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.288 [2024-11-05 12:51:51.267852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.288 qpair failed and we were unable to recover it. 
00:37:22.288 [2024-11-05 12:51:51.267959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.267987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.268962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.268989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.269081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.269108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.288 qpair failed and we were unable to recover it.
00:37:22.288 [2024-11-05 12:51:51.269214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.288 [2024-11-05 12:51:51.269241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.269324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.269351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.269447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.269473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.269557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.269583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.269667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.269694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.269808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.269834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.269920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.269948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.270891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.270979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.271156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.271289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.271401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.271520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.271643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.271746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.271889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.271916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.272946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.272972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.273047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.273072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.273154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.273179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.273292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.273318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.273402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.273428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.273503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.273529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.273611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.273635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.289 qpair failed and we were unable to recover it.
00:37:22.289 [2024-11-05 12:51:51.273710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.289 [2024-11-05 12:51:51.273735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.273820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.273850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.273945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.273971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.274112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.274218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.274331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.274440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.274578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.274755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.274890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.274976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.275973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.275999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.276952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.276978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.277059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.277085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.277163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.277193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.277287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.277313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.277418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.277443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.277524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.277550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.277631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.277657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.290 qpair failed and we were unable to recover it.
00:37:22.290 [2024-11-05 12:51:51.277734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.290 [2024-11-05 12:51:51.277758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.277835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.277866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.277968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.278943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.278970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.279894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.279983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.280881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.280974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.281001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.281090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.281118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.281229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.281256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.281374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.281401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.281507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.281534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.281646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.281673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.281754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.291 [2024-11-05 12:51:51.281781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.291 qpair failed and we were unable to recover it.
00:37:22.291 [2024-11-05 12:51:51.281865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.291 [2024-11-05 12:51:51.281892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.291 qpair failed and we were unable to recover it. 00:37:22.291 [2024-11-05 12:51:51.281977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.291 [2024-11-05 12:51:51.282005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.291 qpair failed and we were unable to recover it. 00:37:22.291 [2024-11-05 12:51:51.282094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.291 [2024-11-05 12:51:51.282121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.291 qpair failed and we were unable to recover it. 00:37:22.291 [2024-11-05 12:51:51.282205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.291 [2024-11-05 12:51:51.282231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.291 qpair failed and we were unable to recover it. 00:37:22.291 [2024-11-05 12:51:51.282345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.291 [2024-11-05 12:51:51.282371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.291 qpair failed and we were unable to recover it. 
00:37:22.291 [2024-11-05 12:51:51.282448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.282473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.282591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.282617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.282693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.282719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.282796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.282821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.282909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.282942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 
00:37:22.292 [2024-11-05 12:51:51.283041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.283181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.283300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.283412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.283518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 
00:37:22.292 [2024-11-05 12:51:51.283657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.283772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.283916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.283944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.284038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.284137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 
00:37:22.292 [2024-11-05 12:51:51.284248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.284377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.284488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.284602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.284704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 
00:37:22.292 [2024-11-05 12:51:51.284846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.284877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.284989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.285095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.285212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.285334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 
00:37:22.292 [2024-11-05 12:51:51.285449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.285579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.285692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.285800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.285936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.285975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 
00:37:22.292 [2024-11-05 12:51:51.286090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.286118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.286203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.286229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.286336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.286369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.286488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.286521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.286639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.286667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 
00:37:22.292 [2024-11-05 12:51:51.286775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.286802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.286896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.286923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.287022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.287051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.292 qpair failed and we were unable to recover it. 00:37:22.292 [2024-11-05 12:51:51.287133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.292 [2024-11-05 12:51:51.287160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.287247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.287273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 
00:37:22.293 [2024-11-05 12:51:51.287384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.287411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.287496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.287522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.287608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.287634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.287773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.287799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.287886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.287914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 
00:37:22.293 [2024-11-05 12:51:51.287992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.288127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.288233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.288352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.288488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 
00:37:22.293 [2024-11-05 12:51:51.288623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.288739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.288840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.288953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.288978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.289062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 
00:37:22.293 [2024-11-05 12:51:51.289185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.289300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.289413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.289527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.289635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 
00:37:22.293 [2024-11-05 12:51:51.289744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.289881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.289909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.289990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.290104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.290217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 
00:37:22.293 [2024-11-05 12:51:51.290340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.290475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.290594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.290693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 00:37:22.293 [2024-11-05 12:51:51.290801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.293 [2024-11-05 12:51:51.290827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.293 qpair failed and we were unable to recover it. 
00:37:22.293 [2024-11-05 12:51:51.290916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.293 [2024-11-05 12:51:51.290942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.293 qpair failed and we were unable to recover it.
[... the same error triplet repeats continuously from 12:51:51.290916 through 12:51:51.304673 (console timestamps 00:37:22.293-00:37:22.296): posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x12f8690, tqpair=0x7f47a8000b90, or tqpair=0x7f47b4000b90, all targeting addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:37:22.296 [2024-11-05 12:51:51.304789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.296 [2024-11-05 12:51:51.304815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.296 qpair failed and we were unable to recover it. 00:37:22.296 [2024-11-05 12:51:51.304925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.296 [2024-11-05 12:51:51.304952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.296 qpair failed and we were unable to recover it. 00:37:22.296 [2024-11-05 12:51:51.305066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.296 [2024-11-05 12:51:51.305092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.296 qpair failed and we were unable to recover it. 00:37:22.296 [2024-11-05 12:51:51.305171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.296 [2024-11-05 12:51:51.305197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.305278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.305305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.305379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.305404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.305486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.305513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.305633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.305659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.305740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.305767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.305866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.305896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.306012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.306122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.306261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.306376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.306498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.306615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.306724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.306873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.306901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.306982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.307009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.307152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.307179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.307287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.307314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.307413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.307439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.307553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.307580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.307687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.307726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.307855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.307889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.307972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.308090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.308199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.308342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.308502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.308602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.308756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.308871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.308898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.308983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.309087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.309195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.309333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.309436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.309545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.309650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 
00:37:22.297 [2024-11-05 12:51:51.309777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.309911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.297 [2024-11-05 12:51:51.309941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.297 qpair failed and we were unable to recover it. 00:37:22.297 [2024-11-05 12:51:51.310021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.310126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.310241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.310357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.310471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.310581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.310696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.310806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.310927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.310954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.311034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.311149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.311260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.311375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.311517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.311623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.311737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.311892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.311921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.312006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.312155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.312262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.312373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.312517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.312623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.312740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.312850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.312965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.312995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.313108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.313219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.313331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.313437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.313587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.313693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.313810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.313953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.313993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.314115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.314143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.314257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.314284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.314376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.314403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.298 [2024-11-05 12:51:51.314483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.314511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 
00:37:22.298 [2024-11-05 12:51:51.314599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.298 [2024-11-05 12:51:51.314626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.298 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.314720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.314748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.314835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.314869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.314952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.314978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.315068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 
00:37:22.299 [2024-11-05 12:51:51.315206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.315318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.315428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.315556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.315657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 
00:37:22.299 [2024-11-05 12:51:51.315764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.315892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.315919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.316030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.316174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.316286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 
00:37:22.299 [2024-11-05 12:51:51.316401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.316516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.316629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.316767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.316881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.316908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 
00:37:22.299 [2024-11-05 12:51:51.317014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.317121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.317229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.317346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.317451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 
00:37:22.299 [2024-11-05 12:51:51.317555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.317667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.317768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.317901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.317927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.318002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.318027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 
00:37:22.299 [2024-11-05 12:51:51.318154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.318183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.318265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.318293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.318382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.318408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.318524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.318551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 00:37:22.299 [2024-11-05 12:51:51.318632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.299 [2024-11-05 12:51:51.318659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.299 qpair failed and we were unable to recover it. 
00:37:22.299 [2024-11-05 12:51:51.318743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.318770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.318881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.318908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.318992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.319125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.319240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 
00:37:22.300 [2024-11-05 12:51:51.319349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.319459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.319571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.319670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.319774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 
00:37:22.300 [2024-11-05 12:51:51.319892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.319921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 
00:37:22.300 [2024-11-05 12:51:51.320471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.320942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.320969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 
00:37:22.300 [2024-11-05 12:51:51.321061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.321174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.321297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.321404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.321525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 
00:37:22.300 [2024-11-05 12:51:51.321661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.321766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.321900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.321927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.322015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.322119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 
00:37:22.300 [2024-11-05 12:51:51.322223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.322336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.322469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.322586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.322686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 
00:37:22.300 [2024-11-05 12:51:51.322801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.322951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.322979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.323070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.323097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.300 qpair failed and we were unable to recover it. 00:37:22.300 [2024-11-05 12:51:51.323178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.300 [2024-11-05 12:51:51.323205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.323284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.323310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 
00:37:22.301 [2024-11-05 12:51:51.323423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.323450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.323534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.323560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.323647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.323674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.323789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.323815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.323932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.323959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 
00:37:22.301 [2024-11-05 12:51:51.324044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.324154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.324268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.324368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.324480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 
00:37:22.301 [2024-11-05 12:51:51.324638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.324765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.324873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.324900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.324977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.325003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 00:37:22.301 [2024-11-05 12:51:51.325092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.325118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 
00:37:22.301 [2024-11-05 12:51:51.325194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.301 [2024-11-05 12:51:51.325220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.301 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 12:51:51.325194 through 12:51:51.338802 (console timestamps 00:37:22.301–00:37:22.304), cycling over tqpair values 0x12f8690, 0x7f47a8000b90, and 0x7f47b4000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:37:22.304 [2024-11-05 12:51:51.338893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.338921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.339015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.339128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.339259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.339360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 
00:37:22.304 [2024-11-05 12:51:51.339516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.339677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.339824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.339939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.339966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.340050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 
00:37:22.304 [2024-11-05 12:51:51.340191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.340330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.340475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.340613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.340725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 
00:37:22.304 [2024-11-05 12:51:51.340834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.340959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.340986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.341123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.341150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.341232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.341260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 00:37:22.304 [2024-11-05 12:51:51.341344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.304 [2024-11-05 12:51:51.341372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.304 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.341484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.341511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.341606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.341645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.341742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.341770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.341853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.341886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.341968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.341997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.342113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.342252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.342365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.342472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.342573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.342693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.342822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.342954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.342982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.343071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.343098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.343207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.343232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.343312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.343339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.343423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.343451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.343537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.343572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.343680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.343707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.343795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.343824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.343985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.344096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.344239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.344371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.344480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.344617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.344737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.344843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.344892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.344975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.345082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.345183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.345295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.345438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.345550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.345693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 
00:37:22.305 [2024-11-05 12:51:51.345793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.345915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.345942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.346056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.305 [2024-11-05 12:51:51.346082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.305 qpair failed and we were unable to recover it. 00:37:22.305 [2024-11-05 12:51:51.346171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.346197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.346311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.346337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 
00:37:22.306 [2024-11-05 12:51:51.346443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.346470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.346556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.346583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.346668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.346695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.346811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.346838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.346925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.346956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 
00:37:22.306 [2024-11-05 12:51:51.347037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.347156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.347261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.347363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.347475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 
00:37:22.306 [2024-11-05 12:51:51.347591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.347703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.347839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.347963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.347990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 00:37:22.306 [2024-11-05 12:51:51.348104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.306 [2024-11-05 12:51:51.348130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.306 qpair failed and we were unable to recover it. 
00:37:22.306 [2024-11-05 12:51:51.348216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.306 [2024-11-05 12:51:51.348243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.306 qpair failed and we were unable to recover it.
[... the same three-line record (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x7f47a8000b90, tqpair=0x7f47b4000b90, and tqpair=0x12f8690, all targeting addr=10.0.0.2, port=4420, from 12:51:51.348216 through 12:51:51.362159 ...]
00:37:22.309 [2024-11-05 12:51:51.362274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.362301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.362385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.362413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.362499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.362526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.362607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.362633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.362744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.362770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 
00:37:22.309 [2024-11-05 12:51:51.362852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.362884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.362995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.363103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.363215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.363323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 
00:37:22.309 [2024-11-05 12:51:51.363431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.363538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.363642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.363749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.363875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.363902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 
00:37:22.309 [2024-11-05 12:51:51.363987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.364013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.364122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.364148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.364237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.309 [2024-11-05 12:51:51.364263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.309 qpair failed and we were unable to recover it. 00:37:22.309 [2024-11-05 12:51:51.364344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.364369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.364454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.364482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.364565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.364593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.364679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.364710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.364798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.364825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.364944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.364971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.365049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.365216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.365325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.365432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.365574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.365700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.365826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.365958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.365986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.366075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.366183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.366293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.366431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.366543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.366656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.366764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.366870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.366897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.367008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.367111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.367275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.367407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.367521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.367636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.367743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.367851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.367890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.368000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.368110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.368218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.368325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.368468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.368583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.368694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 
00:37:22.310 [2024-11-05 12:51:51.368834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.368944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.310 [2024-11-05 12:51:51.368970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.310 qpair failed and we were unable to recover it. 00:37:22.310 [2024-11-05 12:51:51.369080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.369184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.369324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 
00:37:22.311 [2024-11-05 12:51:51.369428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.369563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.369671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.369788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.369908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.369935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 
00:37:22.311 [2024-11-05 12:51:51.370013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.370125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.370261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.370370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.370481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 
00:37:22.311 [2024-11-05 12:51:51.370588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.370690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.370799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.370930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.370968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.371068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 
00:37:22.311 [2024-11-05 12:51:51.371210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.371323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.371447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.371563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.371687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 
00:37:22.311 [2024-11-05 12:51:51.371820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.371943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.371969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.372051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.372155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.372287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 
00:37:22.311 [2024-11-05 12:51:51.372421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.372527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.372664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.372777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.372911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.372943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 
00:37:22.311 [2024-11-05 12:51:51.373023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.373049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.373155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.373181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.373262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.373288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.311 [2024-11-05 12:51:51.373362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.311 [2024-11-05 12:51:51.373387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.311 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.373465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.373491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.373572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.373599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.373685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.373711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.373797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.373826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.373927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.373966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.374063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.374173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.374307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.374417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.374533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.374643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.374753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.374853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.374971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.374998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.375074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.375181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.375321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.375464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.375576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.375686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.375825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.375942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.375968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.376079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.376191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.376299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.376406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.376524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.376634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.376770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.376877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.376904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.377010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.377126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.377237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.377345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.377466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.377575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 
00:37:22.312 [2024-11-05 12:51:51.377683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.377788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.312 [2024-11-05 12:51:51.377931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.312 [2024-11-05 12:51:51.377959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.312 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.378045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.378149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 
00:37:22.313 [2024-11-05 12:51:51.378265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.378368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.378476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.378617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.378732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 
00:37:22.313 [2024-11-05 12:51:51.378879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.378907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.378994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.379134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.379284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.379393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 
00:37:22.313 [2024-11-05 12:51:51.379494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.379611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.379716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.379822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.379938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.379965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 
00:37:22.313 [2024-11-05 12:51:51.380038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.380175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.380280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.380383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.380551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 
00:37:22.313 [2024-11-05 12:51:51.380663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.380776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.380889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.313 [2024-11-05 12:51:51.380917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.313 qpair failed and we were unable to recover it. 00:37:22.313 [2024-11-05 12:51:51.381002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.381170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 
00:37:22.314 [2024-11-05 12:51:51.381276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.381389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.381493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.381634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.381735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 
00:37:22.314 [2024-11-05 12:51:51.381878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.381905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.381983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.382085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.382195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.382334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 
00:37:22.314 [2024-11-05 12:51:51.382447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.382594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.382712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.382857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.382893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 00:37:22.314 [2024-11-05 12:51:51.383012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.314 [2024-11-05 12:51:51.383038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.314 qpair failed and we were unable to recover it. 
00:37:22.316 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:37:22.316 [2024-11-05 12:51:51.391960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 [2024-11-05 12:51:51.391986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:37:22.316 [2024-11-05 12:51:51.392107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 [2024-11-05 12:51:51.392133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:37:22.316 [2024-11-05 12:51:51.392238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 [2024-11-05 12:51:51.392264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 [2024-11-05 12:51:51.392339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:37:22.316 [2024-11-05 12:51:51.392365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 [2024-11-05 12:51:51.392451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:22.316 [2024-11-05 12:51:51.392477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 [2024-11-05 12:51:51.392560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 [2024-11-05 12:51:51.392585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 [2024-11-05 12:51:51.392670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 [2024-11-05 12:51:51.392696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 [2024-11-05 12:51:51.392776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 [2024-11-05 12:51:51.392802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.316 [2024-11-05 12:51:51.392907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.316 [2024-11-05 12:51:51.392933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.316 qpair failed and we were unable to recover it.
00:37:22.317 [2024-11-05 12:51:51.395847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.395881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.395961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.395986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.396068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.396202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.396336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 
00:37:22.317 [2024-11-05 12:51:51.396447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.396570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.396685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.396823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.396968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.396995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 
00:37:22.317 [2024-11-05 12:51:51.397074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.397178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.397281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.397389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.397528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 
00:37:22.317 [2024-11-05 12:51:51.397633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.397775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.397885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.397911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.317 [2024-11-05 12:51:51.397988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.317 [2024-11-05 12:51:51.398014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.317 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.398091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 
00:37:22.318 [2024-11-05 12:51:51.398231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.398371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.398476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.398586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.398694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 
00:37:22.318 [2024-11-05 12:51:51.398835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.398948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.398974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.399061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.399171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.399304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 
00:37:22.318 [2024-11-05 12:51:51.399408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.399518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.399638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.399781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.399947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.399987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 
00:37:22.318 [2024-11-05 12:51:51.400110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.400224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.400337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.400452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.400553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 
00:37:22.318 [2024-11-05 12:51:51.400657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.400772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.400944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.400975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.401057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.401161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 
00:37:22.318 [2024-11-05 12:51:51.401271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.401372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.401514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.401620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.401733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 
00:37:22.318 [2024-11-05 12:51:51.401839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.318 qpair failed and we were unable to recover it. 00:37:22.318 [2024-11-05 12:51:51.401958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.318 [2024-11-05 12:51:51.401984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.402067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.402179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.402292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 
00:37:22.319 [2024-11-05 12:51:51.402424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.402528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.402638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.402748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.402907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.402937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 
00:37:22.319 [2024-11-05 12:51:51.403033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.403142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.403244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.403353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.403500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 
00:37:22.319 [2024-11-05 12:51:51.403623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.403729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.403870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.403898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.403980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.404088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 
00:37:22.319 [2024-11-05 12:51:51.404228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.404339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.404478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.404581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.404696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 
00:37:22.319 [2024-11-05 12:51:51.404800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.404943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.404971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.405054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.405158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.405261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 
00:37:22.319 [2024-11-05 12:51:51.405365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.405495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.405604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.405741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.405843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 
00:37:22.319 [2024-11-05 12:51:51.405953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.405979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.406059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.406085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.406204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.406230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.406304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.319 [2024-11-05 12:51:51.406330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.319 qpair failed and we were unable to recover it. 00:37:22.319 [2024-11-05 12:51:51.406406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.406432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.406509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.406535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.406618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.406647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.406731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.406761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.406845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.406882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.406966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.406993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.407069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.407207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.407317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.407421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.407529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.407670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.407776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.407881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.407908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.407991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.408095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.408204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.408309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.408420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.408531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.408642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.408782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.408902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.408935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.409022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.409132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.409251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.409361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.409470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.409593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.409698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.409810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.409952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.409979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.410059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.410085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.410169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.410196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.410337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.410363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.410443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.410468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 
00:37:22.320 [2024-11-05 12:51:51.410550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.320 [2024-11-05 12:51:51.410577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.320 qpair failed and we were unable to recover it. 00:37:22.320 [2024-11-05 12:51:51.410663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.410689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.410809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.410837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.410941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.410969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.411051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.411078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 
00:37:22.321 [2024-11-05 12:51:51.411159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:22.321 [2024-11-05 12:51:51.411186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.411265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.411291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:22.321 [2024-11-05 12:51:51.411368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.411394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.321 [2024-11-05 12:51:51.411505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.411534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 
00:37:22.321 [2024-11-05 12:51:51.411614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:22.321 [2024-11-05 12:51:51.411641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.411724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.411752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.411829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.411854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.411958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.411984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.412063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 
00:37:22.321 [2024-11-05 12:51:51.412170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.412283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.412396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.412534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.412642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 
00:37:22.321 [2024-11-05 12:51:51.412755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.412875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.412904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.413024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.413134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.413252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 
00:37:22.321 [2024-11-05 12:51:51.413389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.413493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.413596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.413703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.413809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 
00:37:22.321 [2024-11-05 12:51:51.413920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.413948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 
00:37:22.321 [2024-11-05 12:51:51.414524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.321 [2024-11-05 12:51:51.414884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.321 qpair failed and we were unable to recover it. 00:37:22.321 [2024-11-05 12:51:51.414969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.322 [2024-11-05 12:51:51.414996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.322 qpair failed and we were unable to recover it. 
00:37:22.322 [2024-11-05 12:51:51.415080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.322 [2024-11-05 12:51:51.415106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.322 qpair failed and we were unable to recover it. 00:37:22.322 [2024-11-05 12:51:51.415194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.322 [2024-11-05 12:51:51.415225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.322 qpair failed and we were unable to recover it. 00:37:22.322 [2024-11-05 12:51:51.415305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.322 [2024-11-05 12:51:51.415333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.322 qpair failed and we were unable to recover it. 00:37:22.322 [2024-11-05 12:51:51.415432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.322 [2024-11-05 12:51:51.415471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.322 qpair failed and we were unable to recover it. 00:37:22.322 [2024-11-05 12:51:51.415559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.322 [2024-11-05 12:51:51.415587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.322 qpair failed and we were unable to recover it. 
00:37:22.322 [2024-11-05 12:51:51.415692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.322 [2024-11-05 12:51:51.415720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.322 qpair failed and we were unable to recover it.
00:37:22.322 [2024-11-05 12:51:51.415791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.322 [2024-11-05 12:51:51.415818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.322 qpair failed and we were unable to recover it.
00:37:22.322-00:37:22.325 [2024-11-05 12:51:51.415906 through 12:51:51.429051] (the same three-line sequence — connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it." — repeats for tqpair values 0x12f8690, 0x7f47a8000b90, 0x7f47ac000b90, and 0x7f47b4000b90, all with addr=10.0.0.2, port=4420)
00:37:22.325 [2024-11-05 12:51:51.429137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.429163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.429249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.429278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.429395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.429429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.429515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.429544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.429634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.429659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 
00:37:22.325 [2024-11-05 12:51:51.429736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.429763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.429878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.429904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.429998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.430105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.430209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 
00:37:22.325 [2024-11-05 12:51:51.430314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.430447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.430559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.430667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.430778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 
00:37:22.325 [2024-11-05 12:51:51.430905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.430934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.431029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.431056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.431142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.431169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.431252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.431279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 00:37:22.325 [2024-11-05 12:51:51.431390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.325 [2024-11-05 12:51:51.431417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.325 qpair failed and we were unable to recover it. 
00:37:22.325 [2024-11-05 12:51:51.431497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.431522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.431615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.431655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.431740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.431768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.431845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.431879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.431990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 
00:37:22.326 [2024-11-05 12:51:51.432097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.432235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.432350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.432457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.432572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 
00:37:22.326 [2024-11-05 12:51:51.432684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.432787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.432894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.432920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.433025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.433144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 
00:37:22.326 [2024-11-05 12:51:51.433285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.433426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.433566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.433674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.433779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 
00:37:22.326 [2024-11-05 12:51:51.433892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.433919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 
00:37:22.326 [2024-11-05 12:51:51.434485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.434837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.434965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 
00:37:22.326 [2024-11-05 12:51:51.435097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.435234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.435368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.435480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.435593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 
00:37:22.326 [2024-11-05 12:51:51.435698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.435834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.435866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.326 [2024-11-05 12:51:51.435982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.326 [2024-11-05 12:51:51.436008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.326 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.436083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.436249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 
00:37:22.327 [2024-11-05 12:51:51.436359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.436473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.436574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.436716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.436823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 
00:37:22.327 [2024-11-05 12:51:51.436939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.436965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.437076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.437178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.437285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.437398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 
00:37:22.327 [2024-11-05 12:51:51.437516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.437659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.437766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.437887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.437917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.438015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.438054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 
00:37:22.327 [2024-11-05 12:51:51.438174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.438201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.438314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.438342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.438430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.438456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.438571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.438598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 00:37:22.327 [2024-11-05 12:51:51.438716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.327 [2024-11-05 12:51:51.438742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.327 qpair failed and we were unable to recover it. 
00:37:22.330 [2024-11-05 12:51:51.451830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.451876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.451966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.451993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.452070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.452180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.452282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 
00:37:22.330 [2024-11-05 12:51:51.452397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.452536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.452640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.452742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 Malloc0 00:37:22.330 [2024-11-05 12:51:51.452868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.452919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 
00:37:22.330 [2024-11-05 12:51:51.453008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.453037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.330 [2024-11-05 12:51:51.453130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.330 [2024-11-05 12:51:51.453156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.330 qpair failed and we were unable to recover it. 00:37:22.331 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.331 [2024-11-05 12:51:51.453243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.453270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.453345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:22.331 [2024-11-05 12:51:51.453375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 
00:37:22.331 [2024-11-05 12:51:51.453457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.453484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.453569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.453595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.453675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.453700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.453789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.453815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.453908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.453934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 
00:37:22.331 [2024-11-05 12:51:51.454022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.454130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.454232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.454342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.454452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 
00:37:22.331 [2024-11-05 12:51:51.454566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.454677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.454799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.454927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.454967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.455058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 
00:37:22.331 [2024-11-05 12:51:51.455208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.455317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.455425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.455540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.455650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 
00:37:22.331 [2024-11-05 12:51:51.455782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.455893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.455919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.455997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.456101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.456216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 
00:37:22.331 [2024-11-05 12:51:51.456344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.456488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.456532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:22.331 [2024-11-05 12:51:51.456593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.456703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.456846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 
00:37:22.331 [2024-11-05 12:51:51.456966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.456994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.457082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.331 [2024-11-05 12:51:51.457110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.331 qpair failed and we were unable to recover it. 00:37:22.331 [2024-11-05 12:51:51.457200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.457228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.457343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.457370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.457454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.457482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 
00:37:22.332 [2024-11-05 12:51:51.457566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.457593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.457706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.457732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.457846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.457885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.457972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.457999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.458084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 
00:37:22.332 [2024-11-05 12:51:51.458217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.458327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.458468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.458583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.458690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 
00:37:22.332 [2024-11-05 12:51:51.458796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.458932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.458959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.459041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.459141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.459245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 
00:37:22.332 [2024-11-05 12:51:51.459350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.459459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.459576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.459679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.459796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 
00:37:22.332 [2024-11-05 12:51:51.459920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.459949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.460027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.460144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.460257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.460370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 
00:37:22.332 [2024-11-05 12:51:51.460478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.460594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.460734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.332 [2024-11-05 12:51:51.460880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.332 [2024-11-05 12:51:51.460907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.332 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.460991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 [2024-11-05 12:51:51.461108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.461215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.461354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.461472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.461586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 [2024-11-05 12:51:51.461700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.461805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.461934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.461962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.462049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.462156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 [2024-11-05 12:51:51.462263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.462395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.462540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.462658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.462765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 [2024-11-05 12:51:51.462892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.462919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 [2024-11-05 12:51:51.463459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.463916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.463943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 [2024-11-05 12:51:51.464033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.464146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.464268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.464381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.464514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 [2024-11-05 12:51:51.464620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.464721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.333 [2024-11-05 12:51:51.464747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 [2024-11-05 12:51:51.464832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:22.333 [2024-11-05 12:51:51.464959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.464986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 
00:37:22.333 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.333 [2024-11-05 12:51:51.465072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.333 [2024-11-05 12:51:51.465100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.333 qpair failed and we were unable to recover it. 00:37:22.333 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:22.334 [2024-11-05 12:51:51.465192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.465219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.465312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.465338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.465417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.465442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.465533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.465561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 
00:37:22.334 [2024-11-05 12:51:51.465672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.465698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.465777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.465804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.465914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.465940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.466021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.466128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 
00:37:22.334 [2024-11-05 12:51:51.466233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.466349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.466488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.466607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.466711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 
00:37:22.334 [2024-11-05 12:51:51.466816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.466932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.466958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.467045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.467196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.467299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 
00:37:22.334 [2024-11-05 12:51:51.467411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.467549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.467649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.467766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.467912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.467941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 
00:37:22.334 [2024-11-05 12:51:51.468022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.468130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.468238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.468347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.468455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 
00:37:22.334 [2024-11-05 12:51:51.468605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.468733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.468873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.468902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.469013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.469040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.469128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.469155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 
00:37:22.334 [2024-11-05 12:51:51.469277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.334 [2024-11-05 12:51:51.469304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.334 qpair failed and we were unable to recover it. 00:37:22.334 [2024-11-05 12:51:51.469386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.469413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.469501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.469529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.469635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.469662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.469745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.469774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 
00:37:22.335 [2024-11-05 12:51:51.469857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.469891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.469973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.470107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.470245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.470360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 
00:37:22.335 [2024-11-05 12:51:51.470473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.470628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.470759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.470868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.470896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.470975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 
00:37:22.335 [2024-11-05 12:51:51.471076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.471213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.471357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.471463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.471564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 
00:37:22.335 [2024-11-05 12:51:51.471695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.471845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.471899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.471991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.472111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.472244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 
00:37:22.335 [2024-11-05 12:51:51.472365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.472486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.472595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 [2024-11-05 12:51:51.472706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 00:37:22.335 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.335 [2024-11-05 12:51:51.472831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.335 [2024-11-05 12:51:51.472870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.335 qpair failed and we were unable to recover it. 
00:37:22.335 [2024-11-05 12:51:51.472961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.335 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:37:22.335 [2024-11-05 12:51:51.472987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.335 qpair failed and we were unable to recover it.
00:37:22.335 [2024-11-05 12:51:51.473081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.335 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:22.335 [2024-11-05 12:51:51.473107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.335 qpair failed and we were unable to recover it.
00:37:22.335 [2024-11-05 12:51:51.473186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.335 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:22.335 [2024-11-05 12:51:51.473213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.335 qpair failed and we were unable to recover it.
00:37:22.335 [2024-11-05 12:51:51.473294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.335 [2024-11-05 12:51:51.473323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.335 qpair failed and we were unable to recover it.
00:37:22.335 [2024-11-05 12:51:51.473402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.335 [2024-11-05 12:51:51.473434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.335 qpair failed and we were unable to recover it.
00:37:22.335 [2024-11-05 12:51:51.473515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.473542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.473635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.473662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.473751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.473778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.473853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.473886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.473963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.473989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.474914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.474941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.475939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.475966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.476949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.476978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.477061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.477087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.477166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.477193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.477283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.477310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.477404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.336 [2024-11-05 12:51:51.477433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.336 qpair failed and we were unable to recover it.
00:37:22.336 [2024-11-05 12:51:51.477558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.477587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.477678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.477704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.477785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.477811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.477895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.477922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.478890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.478918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.479929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.479968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.480062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.480167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.480265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.480401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.480513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.480648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.480776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:22.337 [2024-11-05 12:51:51.480906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.480934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:22.337 [2024-11-05 12:51:51.481025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.481051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:22.337 [2024-11-05 12:51:51.481135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.481161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:22.337 [2024-11-05 12:51:51.481258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.481284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.481368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.481396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.481487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.481513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.481588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.481614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.481712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.337 [2024-11-05 12:51:51.481737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420
00:37:22.337 qpair failed and we were unable to recover it.
00:37:22.337 [2024-11-05 12:51:51.481853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.481890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.481982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.482896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.482923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.483030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.483131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.483233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.483371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.483476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.483582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:22.338 [2024-11-05 12:51:51.483712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47a8000b90 with addr=10.0.0.2, port=4420
00:37:22.338 qpair failed and we were unable to recover it.
00:37:22.338 [2024-11-05 12:51:51.483832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.338 [2024-11-05 12:51:51.483866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.338 qpair failed and we were unable to recover it. 00:37:22.338 [2024-11-05 12:51:51.483961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.597 [2024-11-05 12:51:51.483987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.597 qpair failed and we were unable to recover it. 00:37:22.597 [2024-11-05 12:51:51.484069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.597 [2024-11-05 12:51:51.484096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.597 qpair failed and we were unable to recover it. 00:37:22.597 [2024-11-05 12:51:51.484178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.597 [2024-11-05 12:51:51.484203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47ac000b90 with addr=10.0.0.2, port=4420 00:37:22.597 qpair failed and we were unable to recover it. 00:37:22.597 [2024-11-05 12:51:51.484295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.597 [2024-11-05 12:51:51.484324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f47b4000b90 with addr=10.0.0.2, port=4420 00:37:22.597 qpair failed and we were unable to recover it. 
00:37:22.597 [2024-11-05 12:51:51.484434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.597 [2024-11-05 12:51:51.484461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.597 qpair failed and we were unable to recover it. 00:37:22.597 [2024-11-05 12:51:51.484538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.597 [2024-11-05 12:51:51.484564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.597 qpair failed and we were unable to recover it. 00:37:22.597 [2024-11-05 12:51:51.484655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:22.597 [2024-11-05 12:51:51.484681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f8690 with addr=10.0.0.2, port=4420 00:37:22.597 qpair failed and we were unable to recover it. 
00:37:22.597 [2024-11-05 12:51:51.484790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:22.597 [2024-11-05 12:51:51.487416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.597 [2024-11-05 12:51:51.487547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.597 [2024-11-05 12:51:51.487575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.597 [2024-11-05 12:51:51.487591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.597 [2024-11-05 12:51:51.487603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.597 [2024-11-05 12:51:51.487638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.597 qpair failed and we were unable to recover it.
00:37:22.597 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:22.597 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:22.597 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:37:22.597 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:22.597 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:37:22.597 12:51:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 816099
00:37:22.597 [2024-11-05 12:51:51.497150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.597 [2024-11-05 12:51:51.497241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.597 [2024-11-05 12:51:51.497269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.597 [2024-11-05 12:51:51.497284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.597 [2024-11-05 12:51:51.497296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.597 [2024-11-05 12:51:51.497324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.597 qpair failed and we were unable to recover it.
00:37:22.597 [2024-11-05 12:51:51.507189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.597 [2024-11-05 12:51:51.507282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.597 [2024-11-05 12:51:51.507312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.597 [2024-11-05 12:51:51.507329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.597 [2024-11-05 12:51:51.507341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.597 [2024-11-05 12:51:51.507371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.597 qpair failed and we were unable to recover it.
00:37:22.597 [2024-11-05 12:51:51.517205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.597 [2024-11-05 12:51:51.517296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.597 [2024-11-05 12:51:51.517322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.597 [2024-11-05 12:51:51.517336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.597 [2024-11-05 12:51:51.517348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.597 [2024-11-05 12:51:51.517376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.597 qpair failed and we were unable to recover it.
00:37:22.597 [2024-11-05 12:51:51.527147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.597 [2024-11-05 12:51:51.527236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.597 [2024-11-05 12:51:51.527262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.597 [2024-11-05 12:51:51.527276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.597 [2024-11-05 12:51:51.527288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.597 [2024-11-05 12:51:51.527317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.597 qpair failed and we were unable to recover it.
00:37:22.597 [2024-11-05 12:51:51.537166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.597 [2024-11-05 12:51:51.537261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.537287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.537301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.537313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.537341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.547213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.547295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.547322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.547341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.547354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.547382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.557256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.557345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.557371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.557386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.557398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.557427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.567269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.567365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.567391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.567405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.567417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.567446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.577331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.577445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.577471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.577486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.577498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.577526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.587316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.587400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.587426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.587440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.587452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.587487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.597330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.597421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.597449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.597463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.597475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.597503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.607365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.607451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.607475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.607488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.607500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.607528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.617384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.617476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.617502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.617517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.617529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.617557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.627403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.627487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.627513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.627528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.627540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.627567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.637481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.637578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.637605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.637622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.637634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.637664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.647463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.647548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.647572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.647586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.647598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.647627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.657515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.657603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.598 [2024-11-05 12:51:51.657635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.598 [2024-11-05 12:51:51.657653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.598 [2024-11-05 12:51:51.657666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.598 [2024-11-05 12:51:51.657695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.598 qpair failed and we were unable to recover it.
00:37:22.598 [2024-11-05 12:51:51.667518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.598 [2024-11-05 12:51:51.667639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.667667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.667681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.667694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.667722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.677556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.677647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.677671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.677691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.677704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.677733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.687586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.687715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.687740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.687755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.687766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.687794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.697638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.697726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.697751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.697765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.697777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.697805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.707641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.707736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.707762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.707776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.707788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.707816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.717688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.717780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.717804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.717818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.717829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.717872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.727818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.727956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.727982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.727997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.728009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.728038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.737733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.737821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.737847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.737868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.737882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.737911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.747777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.747875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.747901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.747914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.747926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.747955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.757822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.757926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.757952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.757966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.757979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.758007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.767880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.767970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.767996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.768010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.768023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.768052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.777839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.599 [2024-11-05 12:51:51.777936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.599 [2024-11-05 12:51:51.777961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.599 [2024-11-05 12:51:51.777974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.599 [2024-11-05 12:51:51.777986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.599 [2024-11-05 12:51:51.778015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.599 qpair failed and we were unable to recover it.
00:37:22.599 [2024-11-05 12:51:51.787894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.599 [2024-11-05 12:51:51.787976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.599 [2024-11-05 12:51:51.788002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.599 [2024-11-05 12:51:51.788017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.599 [2024-11-05 12:51:51.788029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.599 [2024-11-05 12:51:51.788057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.599 qpair failed and we were unable to recover it. 
00:37:22.599 [2024-11-05 12:51:51.797935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.599 [2024-11-05 12:51:51.798037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.600 [2024-11-05 12:51:51.798063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.600 [2024-11-05 12:51:51.798077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.600 [2024-11-05 12:51:51.798089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.600 [2024-11-05 12:51:51.798118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.600 qpair failed and we were unable to recover it. 
00:37:22.600 [2024-11-05 12:51:51.807941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.600 [2024-11-05 12:51:51.808027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.600 [2024-11-05 12:51:51.808050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.600 [2024-11-05 12:51:51.808072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.600 [2024-11-05 12:51:51.808084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.600 [2024-11-05 12:51:51.808113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.600 qpair failed and we were unable to recover it. 
00:37:22.600 [2024-11-05 12:51:51.817975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.600 [2024-11-05 12:51:51.818091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.600 [2024-11-05 12:51:51.818116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.600 [2024-11-05 12:51:51.818130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.600 [2024-11-05 12:51:51.818142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.600 [2024-11-05 12:51:51.818170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.600 qpair failed and we were unable to recover it. 
00:37:22.600 [2024-11-05 12:51:51.828027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.600 [2024-11-05 12:51:51.828111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.600 [2024-11-05 12:51:51.828137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.600 [2024-11-05 12:51:51.828152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.600 [2024-11-05 12:51:51.828164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.600 [2024-11-05 12:51:51.828192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.600 qpair failed and we were unable to recover it. 
00:37:22.858 [2024-11-05 12:51:51.838062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.858 [2024-11-05 12:51:51.838190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.858 [2024-11-05 12:51:51.838215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.858 [2024-11-05 12:51:51.838229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.858 [2024-11-05 12:51:51.838241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.858 [2024-11-05 12:51:51.838269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.858 qpair failed and we were unable to recover it. 
00:37:22.858 [2024-11-05 12:51:51.848081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.858 [2024-11-05 12:51:51.848166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.858 [2024-11-05 12:51:51.848193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.858 [2024-11-05 12:51:51.848207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.858 [2024-11-05 12:51:51.848219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.858 [2024-11-05 12:51:51.848253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.858 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.858092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.858191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.858217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.858231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.858243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.858271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.868177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.868307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.868334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.868348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.868361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.868389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.878205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.878295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.878321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.878334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.878346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.878374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.888227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.888314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.888339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.888353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.888365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.888393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.898263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.898354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.898378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.898392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.898404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.898432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.908225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.908317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.908343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.908357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.908370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.908398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.918299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.918385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.918410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.918423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.918435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.918463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.928338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.928443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.928469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.928483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.928495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.928523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.938321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.938408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.938444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.938464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.938477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.938505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.948384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.948467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.948492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.948507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.948519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.948548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.958420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.958511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.958535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.958549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.958561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.958590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.968413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.968502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.968527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.968541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.968553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.859 [2024-11-05 12:51:51.968581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.859 qpair failed and we were unable to recover it. 
00:37:22.859 [2024-11-05 12:51:51.978431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.859 [2024-11-05 12:51:51.978513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.859 [2024-11-05 12:51:51.978538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.859 [2024-11-05 12:51:51.978552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.859 [2024-11-05 12:51:51.978564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:51.978597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:51.988481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.860 [2024-11-05 12:51:51.988570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.860 [2024-11-05 12:51:51.988599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.860 [2024-11-05 12:51:51.988614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.860 [2024-11-05 12:51:51.988626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:51.988653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:51.998506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.860 [2024-11-05 12:51:51.998610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.860 [2024-11-05 12:51:51.998635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.860 [2024-11-05 12:51:51.998649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.860 [2024-11-05 12:51:51.998661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:51.998689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:52.008538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.860 [2024-11-05 12:51:52.008624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.860 [2024-11-05 12:51:52.008648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.860 [2024-11-05 12:51:52.008662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.860 [2024-11-05 12:51:52.008674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:52.008702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:52.018559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.860 [2024-11-05 12:51:52.018646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.860 [2024-11-05 12:51:52.018672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.860 [2024-11-05 12:51:52.018687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.860 [2024-11-05 12:51:52.018699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:52.018727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:52.028571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.860 [2024-11-05 12:51:52.028665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.860 [2024-11-05 12:51:52.028691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.860 [2024-11-05 12:51:52.028705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.860 [2024-11-05 12:51:52.028717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:52.028745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:52.038662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.860 [2024-11-05 12:51:52.038765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.860 [2024-11-05 12:51:52.038791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.860 [2024-11-05 12:51:52.038805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.860 [2024-11-05 12:51:52.038817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:52.038845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:52.048657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.860 [2024-11-05 12:51:52.048758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.860 [2024-11-05 12:51:52.048784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.860 [2024-11-05 12:51:52.048798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.860 [2024-11-05 12:51:52.048810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:22.860 [2024-11-05 12:51:52.048838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.860 qpair failed and we were unable to recover it. 
00:37:22.860 [2024-11-05 12:51:52.058693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.860 [2024-11-05 12:51:52.058815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.860 [2024-11-05 12:51:52.058841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.860 [2024-11-05 12:51:52.058856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.860 [2024-11-05 12:51:52.058877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.860 [2024-11-05 12:51:52.058906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.860 qpair failed and we were unable to recover it.
00:37:22.860 [2024-11-05 12:51:52.068693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.860 [2024-11-05 12:51:52.068773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.860 [2024-11-05 12:51:52.068803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.860 [2024-11-05 12:51:52.068819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.860 [2024-11-05 12:51:52.068832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.860 [2024-11-05 12:51:52.068868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.860 qpair failed and we were unable to recover it.
00:37:22.860 [2024-11-05 12:51:52.078788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.860 [2024-11-05 12:51:52.078891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.860 [2024-11-05 12:51:52.078917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.860 [2024-11-05 12:51:52.078931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.860 [2024-11-05 12:51:52.078943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.860 [2024-11-05 12:51:52.078972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.860 qpair failed and we were unable to recover it.
00:37:22.860 [2024-11-05 12:51:52.088769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.860 [2024-11-05 12:51:52.088851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.860 [2024-11-05 12:51:52.088887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.860 [2024-11-05 12:51:52.088902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.860 [2024-11-05 12:51:52.088914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:22.860 [2024-11-05 12:51:52.088944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.860 qpair failed and we were unable to recover it.
00:37:23.118 [2024-11-05 12:51:52.098804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.098905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.098931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.098945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.098957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.098986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.108942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.109027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.109053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.109068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.109080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.109113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.118877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.118968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.118993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.119006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.119018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.119047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.128899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.128986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.129012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.129027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.129040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.129069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.138915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.138997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.139022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.139035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.139048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.139076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.148917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.149007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.149032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.149047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.149059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.149088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.158968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.159057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.159082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.159095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.159107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.159136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.169001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.169084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.169109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.169122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.169134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.169162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.179011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.179091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.179116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.179130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.179142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.179170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.189039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.189140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.189170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.189186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.189199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.189228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.199104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.199228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.199259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.199274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.199287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.199315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.209085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.209177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.209201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.209215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.209226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.209255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.219115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.119 [2024-11-05 12:51:52.219214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.119 [2024-11-05 12:51:52.219239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.119 [2024-11-05 12:51:52.219252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.119 [2024-11-05 12:51:52.219265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.119 [2024-11-05 12:51:52.219293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.119 qpair failed and we were unable to recover it.
00:37:23.119 [2024-11-05 12:51:52.229199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.229287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.229313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.229327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.229340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.120 [2024-11-05 12:51:52.229368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.239185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.239275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.239298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.239311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.239324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.120 [2024-11-05 12:51:52.239357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.249250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.249335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.249359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.249373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.249384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.120 [2024-11-05 12:51:52.249413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.259250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.259369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.259395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.259410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.259422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.120 [2024-11-05 12:51:52.259450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.269242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.269347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.269373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.269388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.269400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.120 [2024-11-05 12:51:52.269428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.279390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.279490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.279516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.279530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.279542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:23.120 [2024-11-05 12:51:52.279570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.289367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.289462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.289495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.289511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.289523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.120 [2024-11-05 12:51:52.289555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.299405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.299504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.299531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.299546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.299558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.120 [2024-11-05 12:51:52.299587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.309410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.309499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.309526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.309541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.309553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.120 [2024-11-05 12:51:52.309582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.319426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.319544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.319571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.319585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.319597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.120 [2024-11-05 12:51:52.319626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.329413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.329498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.329533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.329548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.329560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.120 [2024-11-05 12:51:52.329590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.339485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.339607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.339632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.339647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.339659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.120 [2024-11-05 12:51:52.339689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.120 [2024-11-05 12:51:52.349507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.120 [2024-11-05 12:51:52.349599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.120 [2024-11-05 12:51:52.349628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.120 [2024-11-05 12:51:52.349645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.120 [2024-11-05 12:51:52.349657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.120 [2024-11-05 12:51:52.349687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.120 qpair failed and we were unable to recover it.
00:37:23.379 [2024-11-05 12:51:52.359508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.379 [2024-11-05 12:51:52.359599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.379 [2024-11-05 12:51:52.359625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.379 [2024-11-05 12:51:52.359639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.379 [2024-11-05 12:51:52.359652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.379 [2024-11-05 12:51:52.359682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.379 qpair failed and we were unable to recover it.
00:37:23.379 [2024-11-05 12:51:52.369573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.379 [2024-11-05 12:51:52.369688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.379 [2024-11-05 12:51:52.369715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.379 [2024-11-05 12:51:52.369729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.379 [2024-11-05 12:51:52.369750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.379 [2024-11-05 12:51:52.369780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.379 qpair failed and we were unable to recover it.
00:37:23.379 [2024-11-05 12:51:52.379569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.379 [2024-11-05 12:51:52.379656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.379 [2024-11-05 12:51:52.379681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.379 [2024-11-05 12:51:52.379695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.379 [2024-11-05 12:51:52.379707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.379 [2024-11-05 12:51:52.379736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.379 qpair failed and we were unable to recover it.
00:37:23.379 [2024-11-05 12:51:52.389603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.379 [2024-11-05 12:51:52.389688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.379 [2024-11-05 12:51:52.389717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.379 [2024-11-05 12:51:52.389732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.379 [2024-11-05 12:51:52.389744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.379 [2024-11-05 12:51:52.389774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.379 qpair failed and we were unable to recover it.
00:37:23.379 [2024-11-05 12:51:52.399665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.379 [2024-11-05 12:51:52.399769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.379 [2024-11-05 12:51:52.399794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.379 [2024-11-05 12:51:52.399809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.379 [2024-11-05 12:51:52.399821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.379 [2024-11-05 12:51:52.399850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.379 qpair failed and we were unable to recover it.
00:37:23.379 [2024-11-05 12:51:52.409659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.379 [2024-11-05 12:51:52.409742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.379 [2024-11-05 12:51:52.409767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.379 [2024-11-05 12:51:52.409780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.379 [2024-11-05 12:51:52.409793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.379 [2024-11-05 12:51:52.409822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.379 qpair failed and we were unable to recover it. 
00:37:23.379 [2024-11-05 12:51:52.419665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.379 [2024-11-05 12:51:52.419745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.379 [2024-11-05 12:51:52.419769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.379 [2024-11-05 12:51:52.419783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.379 [2024-11-05 12:51:52.419795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.379 [2024-11-05 12:51:52.419824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.379 qpair failed and we were unable to recover it. 
00:37:23.379 [2024-11-05 12:51:52.429725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.379 [2024-11-05 12:51:52.429804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.379 [2024-11-05 12:51:52.429830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.379 [2024-11-05 12:51:52.429844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.379 [2024-11-05 12:51:52.429856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.379 [2024-11-05 12:51:52.429894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.379 qpair failed and we were unable to recover it. 
00:37:23.379 [2024-11-05 12:51:52.439795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.379 [2024-11-05 12:51:52.439893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.379 [2024-11-05 12:51:52.439918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.379 [2024-11-05 12:51:52.439932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.379 [2024-11-05 12:51:52.439944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.379 [2024-11-05 12:51:52.439973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.379 qpair failed and we were unable to recover it. 
00:37:23.379 [2024-11-05 12:51:52.449767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.449856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.449892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.449907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.449920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.449951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.459785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.459871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.459902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.459918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.459930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.459960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.469818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.469915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.469942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.469957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.469969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.469998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.479945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.480056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.480082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.480096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.480109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.480138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.489935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.490070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.490096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.490110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.490122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.490152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.499919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.500001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.500025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.500038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.500056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.500087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.509980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.510084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.510110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.510123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.510136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.510165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.519997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.520086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.520112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.520126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.520138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.520167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.530046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.530135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.530161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.530175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.530187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.530216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.540014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.540144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.540169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.540183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.540195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.540225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.550095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.550220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.550246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.550260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.550272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.550301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.380 qpair failed and we were unable to recover it. 
00:37:23.380 [2024-11-05 12:51:52.560239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.380 [2024-11-05 12:51:52.560358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.380 [2024-11-05 12:51:52.560383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.380 [2024-11-05 12:51:52.560398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.380 [2024-11-05 12:51:52.560410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.380 [2024-11-05 12:51:52.560439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.381 qpair failed and we were unable to recover it. 
00:37:23.381 [2024-11-05 12:51:52.570092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.381 [2024-11-05 12:51:52.570179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.381 [2024-11-05 12:51:52.570203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.381 [2024-11-05 12:51:52.570217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.381 [2024-11-05 12:51:52.570230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.381 [2024-11-05 12:51:52.570259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.381 qpair failed and we were unable to recover it. 
00:37:23.381 [2024-11-05 12:51:52.580131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.381 [2024-11-05 12:51:52.580265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.381 [2024-11-05 12:51:52.580291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.381 [2024-11-05 12:51:52.580305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.381 [2024-11-05 12:51:52.580317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.381 [2024-11-05 12:51:52.580346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.381 qpair failed and we were unable to recover it. 
00:37:23.381 [2024-11-05 12:51:52.590155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.381 [2024-11-05 12:51:52.590240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.381 [2024-11-05 12:51:52.590271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.381 [2024-11-05 12:51:52.590285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.381 [2024-11-05 12:51:52.590297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.381 [2024-11-05 12:51:52.590327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.381 qpair failed and we were unable to recover it. 
00:37:23.381 [2024-11-05 12:51:52.600219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.381 [2024-11-05 12:51:52.600305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.381 [2024-11-05 12:51:52.600329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.381 [2024-11-05 12:51:52.600343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.381 [2024-11-05 12:51:52.600355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.381 [2024-11-05 12:51:52.600385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.381 qpair failed and we were unable to recover it. 
00:37:23.381 [2024-11-05 12:51:52.610226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.381 [2024-11-05 12:51:52.610338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.381 [2024-11-05 12:51:52.610364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.381 [2024-11-05 12:51:52.610378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.381 [2024-11-05 12:51:52.610390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.381 [2024-11-05 12:51:52.610419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.381 qpair failed and we were unable to recover it. 
00:37:23.639 [2024-11-05 12:51:52.620251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.639 [2024-11-05 12:51:52.620335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.639 [2024-11-05 12:51:52.620359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.639 [2024-11-05 12:51:52.620373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.639 [2024-11-05 12:51:52.620385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.639 [2024-11-05 12:51:52.620414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.639 qpair failed and we were unable to recover it. 
00:37:23.639 [2024-11-05 12:51:52.630335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.639 [2024-11-05 12:51:52.630445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.639 [2024-11-05 12:51:52.630471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.639 [2024-11-05 12:51:52.630491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.639 [2024-11-05 12:51:52.630504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.640 [2024-11-05 12:51:52.630533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.640 qpair failed and we were unable to recover it. 
00:37:23.640 [2024-11-05 12:51:52.640323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.640 [2024-11-05 12:51:52.640409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.640 [2024-11-05 12:51:52.640434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.640 [2024-11-05 12:51:52.640448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.640 [2024-11-05 12:51:52.640460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.640 [2024-11-05 12:51:52.640501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.640 qpair failed and we were unable to recover it. 
00:37:23.640 [2024-11-05 12:51:52.650370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.640 [2024-11-05 12:51:52.650458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.640 [2024-11-05 12:51:52.650483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.640 [2024-11-05 12:51:52.650496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.640 [2024-11-05 12:51:52.650508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.640 [2024-11-05 12:51:52.650537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.640 qpair failed and we were unable to recover it. 
00:37:23.640 [2024-11-05 12:51:52.660434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.640 [2024-11-05 12:51:52.660524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.640 [2024-11-05 12:51:52.660552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.640 [2024-11-05 12:51:52.660566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.640 [2024-11-05 12:51:52.660578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.640 [2024-11-05 12:51:52.660608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.640 qpair failed and we were unable to recover it. 
00:37:23.640 [2024-11-05 12:51:52.670509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.640 [2024-11-05 12:51:52.670604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.640 [2024-11-05 12:51:52.670631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.640 [2024-11-05 12:51:52.670645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.640 [2024-11-05 12:51:52.670657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.640 [2024-11-05 12:51:52.670692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.640 qpair failed and we were unable to recover it. 
00:37:23.640 [2024-11-05 12:51:52.680474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.680571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.680597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.680611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.680623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.680652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.690493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.690575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.690601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.690615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.690627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.690656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.700513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.700596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.700620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.700634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.700647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.700676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.710567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.710653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.710681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.710695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.710707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.710742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.720557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.720657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.720683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.720698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.720709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.720739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.730579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.730677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.730704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.730719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.730731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.730762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.740610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.740699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.740726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.740740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.740752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.740781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.750611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.640 [2024-11-05 12:51:52.750690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.640 [2024-11-05 12:51:52.750716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.640 [2024-11-05 12:51:52.750731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.640 [2024-11-05 12:51:52.750743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.640 [2024-11-05 12:51:52.750772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.640 qpair failed and we were unable to recover it.
00:37:23.640 [2024-11-05 12:51:52.760698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.760806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.760832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.760852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.760875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.760905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.770742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.770831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.770855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.770879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.770891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.770921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.780708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.780830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.780855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.780877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.780890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.780920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.790728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.790812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.790838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.790853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.790873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.790903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.800843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.800994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.801019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.801033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.801045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.801080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.810809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.810902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.810932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.810946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.810958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.810988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.820822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.820919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.820945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.820960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.820972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.821001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.830869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.830948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.830974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.830988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.831000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.831030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.840944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.841041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.841066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.841081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.841093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.841122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.850980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.851071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.851096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.851110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.851123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.851152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.860949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.861079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.861105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.861120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.861132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.861161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.641 [2024-11-05 12:51:52.870983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.641 [2024-11-05 12:51:52.871065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.641 [2024-11-05 12:51:52.871091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.641 [2024-11-05 12:51:52.871105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.641 [2024-11-05 12:51:52.871117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.641 [2024-11-05 12:51:52.871146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.641 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.881087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.881206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.881231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.881245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.881257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.881286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.891067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.891157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.891189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.891204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.891216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.891245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.901096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.901198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.901224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.901237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.901250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.901279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.911081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.911163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.911188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.911203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.911214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.911255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.921130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.921224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.921250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.921264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.921277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.921305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.931209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.931291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.931316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.931329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.931347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.931377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.941263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.941352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.941380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.941394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.941406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.941448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.951209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.951290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.951314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.951328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.951348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.951376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.961243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.900 [2024-11-05 12:51:52.961334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.900 [2024-11-05 12:51:52.961359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.900 [2024-11-05 12:51:52.961373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.900 [2024-11-05 12:51:52.961385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.900 [2024-11-05 12:51:52.961426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.900 qpair failed and we were unable to recover it.
00:37:23.900 [2024-11-05 12:51:52.971263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:52.971347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:52.971372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:52.971386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:52.971398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:52.971428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:52.981330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:52.981436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:52.981463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:52.981477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:52.981489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:52.981518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:52.991326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:52.991410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:52.991436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:52.991451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:52.991463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:52.991491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.001389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.001480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.001516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.001530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.001542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.001571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.011375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.011457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.011483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.011497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.011509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.011538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.021493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.021590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.021621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.021636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.021648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.021677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.031425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.901 [2024-11-05 12:51:53.031504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.901 [2024-11-05 12:51:53.031530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.901 [2024-11-05 12:51:53.031545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.901 [2024-11-05 12:51:53.031557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:23.901 [2024-11-05 12:51:53.031598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.901 qpair failed and we were unable to recover it. 
00:37:23.901 [2024-11-05 12:51:53.041473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.041578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.041603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.041617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.041629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.041659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.051529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.051612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.051636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.051651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.051663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.051693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.061511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.061599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.061628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.061642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.061662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.061693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.071635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.071719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.071745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.071759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.071771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.071800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.081581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.081672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.081701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.081715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.081727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.081756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.091589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.091708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.901 [2024-11-05 12:51:53.091733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.901 [2024-11-05 12:51:53.091748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.901 [2024-11-05 12:51:53.091759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.901 [2024-11-05 12:51:53.091788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.901 qpair failed and we were unable to recover it.
00:37:23.901 [2024-11-05 12:51:53.101684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.901 [2024-11-05 12:51:53.101777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.902 [2024-11-05 12:51:53.101802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.902 [2024-11-05 12:51:53.101817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.902 [2024-11-05 12:51:53.101829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.902 [2024-11-05 12:51:53.101865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.902 qpair failed and we were unable to recover it.
00:37:23.902 [2024-11-05 12:51:53.111674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.902 [2024-11-05 12:51:53.111762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.902 [2024-11-05 12:51:53.111788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.902 [2024-11-05 12:51:53.111802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.902 [2024-11-05 12:51:53.111814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.902 [2024-11-05 12:51:53.111843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.902 qpair failed and we were unable to recover it.
00:37:23.902 [2024-11-05 12:51:53.121710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.902 [2024-11-05 12:51:53.121807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.902 [2024-11-05 12:51:53.121831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.902 [2024-11-05 12:51:53.121844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.902 [2024-11-05 12:51:53.121856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.902 [2024-11-05 12:51:53.121894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.902 qpair failed and we were unable to recover it.
00:37:23.902 [2024-11-05 12:51:53.131764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.902 [2024-11-05 12:51:53.131876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.902 [2024-11-05 12:51:53.131902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.902 [2024-11-05 12:51:53.131917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.902 [2024-11-05 12:51:53.131929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:23.902 [2024-11-05 12:51:53.131958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:23.902 qpair failed and we were unable to recover it.
00:37:24.160 [2024-11-05 12:51:53.141783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.160 [2024-11-05 12:51:53.141878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.160 [2024-11-05 12:51:53.141902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.160 [2024-11-05 12:51:53.141916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.160 [2024-11-05 12:51:53.141928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.160 [2024-11-05 12:51:53.141957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.160 qpair failed and we were unable to recover it.
00:37:24.160 [2024-11-05 12:51:53.151805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.160 [2024-11-05 12:51:53.151893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.160 [2024-11-05 12:51:53.151927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.160 [2024-11-05 12:51:53.151942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.160 [2024-11-05 12:51:53.151955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.160 [2024-11-05 12:51:53.151984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.160 qpair failed and we were unable to recover it.
00:37:24.160 [2024-11-05 12:51:53.161812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.160 [2024-11-05 12:51:53.161939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.161965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.161979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.161991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.162020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.171847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.171951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.171980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.171996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.172008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.172039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.181915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.182055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.182082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.182096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.182109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.182138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.191912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.192006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.192035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.192055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.192068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.192098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.201924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.202014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.202038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.202052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.202065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.202095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.211967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.212059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.212084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.212098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.212111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.212141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.221955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.222036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.222060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.222074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.222087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.222116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.232000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.232127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.232152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.232167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.232179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.232214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.242041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.242134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.242159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.242173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.242185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.242215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.252113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.252220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.252245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.252260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.252272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.252302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.262141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.262228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.262257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.262275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.262289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.262331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.272091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.272171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.272198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.272213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.272225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.272254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.282168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.282280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.282306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.161 [2024-11-05 12:51:53.282321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.161 [2024-11-05 12:51:53.282333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.161 [2024-11-05 12:51:53.282363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.161 qpair failed and we were unable to recover it.
00:37:24.161 [2024-11-05 12:51:53.292233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.161 [2024-11-05 12:51:53.292317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.161 [2024-11-05 12:51:53.292341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.162 [2024-11-05 12:51:53.292355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.162 [2024-11-05 12:51:53.292368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.162 [2024-11-05 12:51:53.292398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.162 qpair failed and we were unable to recover it.
00:37:24.162 [2024-11-05 12:51:53.302251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.162 [2024-11-05 12:51:53.302339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.162 [2024-11-05 12:51:53.302363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.162 [2024-11-05 12:51:53.302377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.162 [2024-11-05 12:51:53.302389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.162 [2024-11-05 12:51:53.302418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.162 qpair failed and we were unable to recover it.
00:37:24.162 [2024-11-05 12:51:53.312264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.162 [2024-11-05 12:51:53.312393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.162 [2024-11-05 12:51:53.312419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.162 [2024-11-05 12:51:53.312432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.162 [2024-11-05 12:51:53.312444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.162 [2024-11-05 12:51:53.312473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.162 qpair failed and we were unable to recover it.
00:37:24.162 [2024-11-05 12:51:53.322261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.162 [2024-11-05 12:51:53.322349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.162 [2024-11-05 12:51:53.322373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.162 [2024-11-05 12:51:53.322392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.162 [2024-11-05 12:51:53.322405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.162 [2024-11-05 12:51:53.322434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.162 qpair failed and we were unable to recover it.
00:37:24.162 [2024-11-05 12:51:53.332281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.162 [2024-11-05 12:51:53.332362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.162 [2024-11-05 12:51:53.332386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.162 [2024-11-05 12:51:53.332400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.162 [2024-11-05 12:51:53.332412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.162 [2024-11-05 12:51:53.332441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.162 qpair failed and we were unable to recover it.
00:37:24.162 [2024-11-05 12:51:53.342383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.162 [2024-11-05 12:51:53.342475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.162 [2024-11-05 12:51:53.342500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.162 [2024-11-05 12:51:53.342514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.162 [2024-11-05 12:51:53.342526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.162 [2024-11-05 12:51:53.342556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.162 qpair failed and we were unable to recover it. 
00:37:24.162 [2024-11-05 12:51:53.352333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.162 [2024-11-05 12:51:53.352446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.162 [2024-11-05 12:51:53.352472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.162 [2024-11-05 12:51:53.352486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.162 [2024-11-05 12:51:53.352498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.162 [2024-11-05 12:51:53.352527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.162 qpair failed and we were unable to recover it. 
00:37:24.162 [2024-11-05 12:51:53.362406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.162 [2024-11-05 12:51:53.362498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.162 [2024-11-05 12:51:53.362522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.162 [2024-11-05 12:51:53.362536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.162 [2024-11-05 12:51:53.362547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.162 [2024-11-05 12:51:53.362582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.162 qpair failed and we were unable to recover it. 
00:37:24.162 [2024-11-05 12:51:53.372424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.162 [2024-11-05 12:51:53.372525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.162 [2024-11-05 12:51:53.372555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.162 [2024-11-05 12:51:53.372572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.162 [2024-11-05 12:51:53.372584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.162 [2024-11-05 12:51:53.372614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.162 qpair failed and we were unable to recover it. 
00:37:24.162 [2024-11-05 12:51:53.382444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.162 [2024-11-05 12:51:53.382528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.162 [2024-11-05 12:51:53.382555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.162 [2024-11-05 12:51:53.382569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.162 [2024-11-05 12:51:53.382581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.162 [2024-11-05 12:51:53.382610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.162 qpair failed and we were unable to recover it. 
00:37:24.162 [2024-11-05 12:51:53.392446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.162 [2024-11-05 12:51:53.392547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.162 [2024-11-05 12:51:53.392573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.162 [2024-11-05 12:51:53.392587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.162 [2024-11-05 12:51:53.392599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.162 [2024-11-05 12:51:53.392628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.162 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.402490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.402602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.402628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.402642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.402654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.402684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.412542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.412672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.412698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.412712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.412724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.412753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.422558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.422684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.422709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.422723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.422735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.422764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.432579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.432708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.432733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.432746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.432758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.432788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.442630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.442722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.442751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.442765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.442776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.442806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.452621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.452705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.452734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.452749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.452761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.452789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.462658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.462739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.462765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.462779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.462792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.462822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.472684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.472801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.472831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.472848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.472870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.472905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.482794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.482896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.421 [2024-11-05 12:51:53.482923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.421 [2024-11-05 12:51:53.482938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.421 [2024-11-05 12:51:53.482950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.421 [2024-11-05 12:51:53.482979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.421 qpair failed and we were unable to recover it. 
00:37:24.421 [2024-11-05 12:51:53.492762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.421 [2024-11-05 12:51:53.492870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.492908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.492923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.492942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.492973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.502762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.502848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.502881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.502896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.502909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.502938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.512784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.512886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.512913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.512927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.512939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.512968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.522919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.523019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.523044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.523058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.523070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.523099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.532852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.532953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.532979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.532992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.533004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.533033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.542854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.542952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.542977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.542991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.543003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.543033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.552892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.552984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.553010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.553025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.553037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.553066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.562952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.563060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.563085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.563099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.563112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.563141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.573013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.573099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.573124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.573138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.573149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.573178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.582992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.583078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.583109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.583123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.583135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.583164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.593081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.593164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.593190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.593204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.593215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.593244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.603069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.603159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.603183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.603197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.603209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.603237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.613103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.613187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.613211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.613225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.422 [2024-11-05 12:51:53.613237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.422 [2024-11-05 12:51:53.613279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.422 qpair failed and we were unable to recover it. 
00:37:24.422 [2024-11-05 12:51:53.623193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.422 [2024-11-05 12:51:53.623292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.422 [2024-11-05 12:51:53.623318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.422 [2024-11-05 12:51:53.623332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.423 [2024-11-05 12:51:53.623350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.423 [2024-11-05 12:51:53.623380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.423 qpair failed and we were unable to recover it. 
00:37:24.423 [2024-11-05 12:51:53.633190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.423 [2024-11-05 12:51:53.633270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.423 [2024-11-05 12:51:53.633296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.423 [2024-11-05 12:51:53.633310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.423 [2024-11-05 12:51:53.633322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.423 [2024-11-05 12:51:53.633351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.423 qpair failed and we were unable to recover it. 
00:37:24.423 [2024-11-05 12:51:53.643195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.423 [2024-11-05 12:51:53.643288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.423 [2024-11-05 12:51:53.643313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.423 [2024-11-05 12:51:53.643327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.423 [2024-11-05 12:51:53.643338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.423 [2024-11-05 12:51:53.643368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.423 qpair failed and we were unable to recover it. 
00:37:24.423 [2024-11-05 12:51:53.653226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.423 [2024-11-05 12:51:53.653343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.423 [2024-11-05 12:51:53.653368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.423 [2024-11-05 12:51:53.653382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.423 [2024-11-05 12:51:53.653395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.423 [2024-11-05 12:51:53.653423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.423 qpair failed and we were unable to recover it. 
00:37:24.681 [2024-11-05 12:51:53.663307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.681 [2024-11-05 12:51:53.663429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.681 [2024-11-05 12:51:53.663455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.681 [2024-11-05 12:51:53.663469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.681 [2024-11-05 12:51:53.663481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.681 [2024-11-05 12:51:53.663510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.681 qpair failed and we were unable to recover it. 
00:37:24.681 [2024-11-05 12:51:53.673264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.681 [2024-11-05 12:51:53.673347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.681 [2024-11-05 12:51:53.673373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.681 [2024-11-05 12:51:53.673387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.681 [2024-11-05 12:51:53.673399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.681 [2024-11-05 12:51:53.673428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.681 qpair failed and we were unable to recover it. 
00:37:24.681 [2024-11-05 12:51:53.683292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.681 [2024-11-05 12:51:53.683379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.681 [2024-11-05 12:51:53.683403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.681 [2024-11-05 12:51:53.683417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.681 [2024-11-05 12:51:53.683429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.681 [2024-11-05 12:51:53.683458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.681 qpair failed and we were unable to recover it. 
00:37:24.681 [2024-11-05 12:51:53.693303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.681 [2024-11-05 12:51:53.693405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.681 [2024-11-05 12:51:53.693431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.681 [2024-11-05 12:51:53.693445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.681 [2024-11-05 12:51:53.693457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.681 [2024-11-05 12:51:53.693487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.681 qpair failed and we were unable to recover it. 
00:37:24.681 [2024-11-05 12:51:53.703358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.681 [2024-11-05 12:51:53.703446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.681 [2024-11-05 12:51:53.703470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.681 [2024-11-05 12:51:53.703484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.681 [2024-11-05 12:51:53.703497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.681 [2024-11-05 12:51:53.703527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.681 qpair failed and we were unable to recover it. 
00:37:24.681 [2024-11-05 12:51:53.713361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.681 [2024-11-05 12:51:53.713453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.681 [2024-11-05 12:51:53.713479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.713494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.713506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.713536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.723397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.723486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.723512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.723526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.723538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.723567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.733422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.733504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.733529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.733544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.733556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.733585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.743486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.743573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.743598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.743612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.743624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.743665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.753509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.753591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.753617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.753637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.753650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.753680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.763561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.763655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.763679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.763692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.763705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.763735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.773560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.773649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.773675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.773689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.773701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.773730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.783586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.783678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.783705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.783721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.783741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.783772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.793625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.793711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.793738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.793752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.793764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.793800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.803636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.803740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.803767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.803781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.803794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.803822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.813670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.813782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.813807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.813821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.813833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.813871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.823701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.823787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.823818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.823833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.823845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.823881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.833689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.833771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.833796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.833810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.682 [2024-11-05 12:51:53.833823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.682 [2024-11-05 12:51:53.833852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.682 qpair failed and we were unable to recover it. 
00:37:24.682 [2024-11-05 12:51:53.843750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.682 [2024-11-05 12:51:53.843846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.682 [2024-11-05 12:51:53.843878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.682 [2024-11-05 12:51:53.843893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.843905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.843935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.683 [2024-11-05 12:51:53.853754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.683 [2024-11-05 12:51:53.853839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.683 [2024-11-05 12:51:53.853870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.683 [2024-11-05 12:51:53.853885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.853897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.853927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.683 [2024-11-05 12:51:53.863793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.683 [2024-11-05 12:51:53.863882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.683 [2024-11-05 12:51:53.863907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.683 [2024-11-05 12:51:53.863921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.863933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.863962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.683 [2024-11-05 12:51:53.873826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.683 [2024-11-05 12:51:53.873937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.683 [2024-11-05 12:51:53.873964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.683 [2024-11-05 12:51:53.873978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.873990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.874032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.683 [2024-11-05 12:51:53.883876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.683 [2024-11-05 12:51:53.883967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.683 [2024-11-05 12:51:53.883991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.683 [2024-11-05 12:51:53.884010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.884022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.884052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.683 [2024-11-05 12:51:53.893909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.683 [2024-11-05 12:51:53.893990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.683 [2024-11-05 12:51:53.894014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.683 [2024-11-05 12:51:53.894028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.894040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.894069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.683 [2024-11-05 12:51:53.903924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.683 [2024-11-05 12:51:53.904008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.683 [2024-11-05 12:51:53.904032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.683 [2024-11-05 12:51:53.904046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.904058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.904087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.683 [2024-11-05 12:51:53.914058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:24.683 [2024-11-05 12:51:53.914157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:24.683 [2024-11-05 12:51:53.914183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:24.683 [2024-11-05 12:51:53.914197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.683 [2024-11-05 12:51:53.914209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:24.683 [2024-11-05 12:51:53.914240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:24.683 qpair failed and we were unable to recover it. 
00:37:24.941 [2024-11-05 12:51:53.923998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.941 [2024-11-05 12:51:53.924121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.941 [2024-11-05 12:51:53.924147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.941 [2024-11-05 12:51:53.924161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.941 [2024-11-05 12:51:53.924173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.941 [2024-11-05 12:51:53.924208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.941 qpair failed and we were unable to recover it.
00:37:24.941 [2024-11-05 12:51:53.934022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.941 [2024-11-05 12:51:53.934114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.941 [2024-11-05 12:51:53.934143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.941 [2024-11-05 12:51:53.934158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.941 [2024-11-05 12:51:53.934170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.941 [2024-11-05 12:51:53.934199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.941 qpair failed and we were unable to recover it.
00:37:24.941 [2024-11-05 12:51:53.944017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.941 [2024-11-05 12:51:53.944101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.941 [2024-11-05 12:51:53.944127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.941 [2024-11-05 12:51:53.944142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.941 [2024-11-05 12:51:53.944154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.941 [2024-11-05 12:51:53.944184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.941 qpair failed and we were unable to recover it.
00:37:24.941 [2024-11-05 12:51:53.954038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.941 [2024-11-05 12:51:53.954128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.941 [2024-11-05 12:51:53.954156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.941 [2024-11-05 12:51:53.954170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.941 [2024-11-05 12:51:53.954183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.941 [2024-11-05 12:51:53.954212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.941 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:53.964122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:53.964214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:53.964240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:53.964254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:53.964269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:53.964299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:53.974123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:53.974205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:53.974229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:53.974243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:53.974256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:53.974285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:53.984209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:53.984295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:53.984320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:53.984333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:53.984345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:53.984375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:53.994170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:53.994296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:53.994322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:53.994336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:53.994348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:53.994377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.004208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.004324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.004350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.004364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.004376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.004405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.014273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.014383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.014413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.014428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.014440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.014470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.024303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.024390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.024414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.024427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.024439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.024468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.034350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.034433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.034459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.034473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.034485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.034514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.044330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.044422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.044446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.044459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.044471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.044500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.054336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.054428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.054453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.054467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.054485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.054515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.064394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.064511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.064536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.064550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.064562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.064591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.074438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.074574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.074599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.074613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.074626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.074655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.084459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.942 [2024-11-05 12:51:54.084546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.942 [2024-11-05 12:51:54.084569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.942 [2024-11-05 12:51:54.084582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.942 [2024-11-05 12:51:54.084594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.942 [2024-11-05 12:51:54.084623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.942 qpair failed and we were unable to recover it.
00:37:24.942 [2024-11-05 12:51:54.094443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.094531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.094554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.094567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.094579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.094608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.104498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.104580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.104605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.104618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.104630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.104659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.114524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.114607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.114633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.114647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.114659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.114688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.124560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.124693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.124717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.124731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.124743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.124772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.134565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.134649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.134674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.134687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.134699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.134728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.144583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.144666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.144695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.144710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.144722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.144750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.154640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.154720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.154746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.154760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.154772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.154802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.164675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.164764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.164789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.164803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.164815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.164844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:24.943 [2024-11-05 12:51:54.174682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:24.943 [2024-11-05 12:51:54.174767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:24.943 [2024-11-05 12:51:54.174792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:24.943 [2024-11-05 12:51:54.174805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:24.943 [2024-11-05 12:51:54.174817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:24.943 [2024-11-05 12:51:54.174846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:24.943 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.184722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.184808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.184837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.184851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.184878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.184909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.194762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.194847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.194884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.194900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.194912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.194943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.204831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.204936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.204961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.204974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.204987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.205017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.214845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.214943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.214968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.214982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.214994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.215024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.224840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.224936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.224963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.224978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.224990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.225019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.234910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.234994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.235020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.235034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.235046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.235075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.244928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.245033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.245062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.245077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.245089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.245118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.254937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.255023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.255047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.255061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.255072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.255101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.264974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.202 [2024-11-05 12:51:54.265058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.202 [2024-11-05 12:51:54.265082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.202 [2024-11-05 12:51:54.265096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.202 [2024-11-05 12:51:54.265108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.202 [2024-11-05 12:51:54.265138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.202 qpair failed and we were unable to recover it.
00:37:25.202 [2024-11-05 12:51:54.275015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.202 [2024-11-05 12:51:54.275104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.202 [2024-11-05 12:51:54.275130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.202 [2024-11-05 12:51:54.275144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.202 [2024-11-05 12:51:54.275156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.202 [2024-11-05 12:51:54.275185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.202 qpair failed and we were unable to recover it. 
00:37:25.202 [2024-11-05 12:51:54.285073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.202 [2024-11-05 12:51:54.285165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.202 [2024-11-05 12:51:54.285194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.202 [2024-11-05 12:51:54.285208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.202 [2024-11-05 12:51:54.285220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.202 [2024-11-05 12:51:54.285249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.202 qpair failed and we were unable to recover it. 
00:37:25.202 [2024-11-05 12:51:54.295051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.202 [2024-11-05 12:51:54.295177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.202 [2024-11-05 12:51:54.295203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.202 [2024-11-05 12:51:54.295217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.202 [2024-11-05 12:51:54.295229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.202 [2024-11-05 12:51:54.295257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.202 qpair failed and we were unable to recover it. 
00:37:25.202 [2024-11-05 12:51:54.305085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.202 [2024-11-05 12:51:54.305176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.305202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.305216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.305228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.305257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.315161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.315276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.315304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.315325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.315337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.315379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.325147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.325237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.325261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.325275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.325287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.325317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.335176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.335257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.335282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.335296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.335308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.335338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.345191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.345285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.345311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.345326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.345337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.345379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.355248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.355336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.355362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.355376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.355388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.355423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.365238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.365374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.365399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.365413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.365425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.365455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.375361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.375447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.375471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.375485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.375496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.375526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.385317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.385403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.385427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.385441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.385453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.385481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.395310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.395412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.395438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.395452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.395463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.395492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.405377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.405473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.405497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.405511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.405523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.405552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.415420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.415540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.415566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.415580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.415592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.415622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.425440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.425564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.425590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.203 [2024-11-05 12:51:54.425604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.203 [2024-11-05 12:51:54.425616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.203 [2024-11-05 12:51:54.425646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.203 qpair failed and we were unable to recover it. 
00:37:25.203 [2024-11-05 12:51:54.435479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.203 [2024-11-05 12:51:54.435562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.203 [2024-11-05 12:51:54.435588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.204 [2024-11-05 12:51:54.435602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.204 [2024-11-05 12:51:54.435614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.204 [2024-11-05 12:51:54.435643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.204 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.445528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.445622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.445652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.445667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.462 [2024-11-05 12:51:54.445679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.462 [2024-11-05 12:51:54.445709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.462 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.455511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.455595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.455619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.455632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.462 [2024-11-05 12:51:54.455644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.462 [2024-11-05 12:51:54.455674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.462 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.465527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.465611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.465637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.465651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.462 [2024-11-05 12:51:54.465663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.462 [2024-11-05 12:51:54.465693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.462 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.475547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.475634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.475660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.475674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.462 [2024-11-05 12:51:54.475686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.462 [2024-11-05 12:51:54.475717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.462 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.485602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.485726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.485752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.485766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.462 [2024-11-05 12:51:54.485778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.462 [2024-11-05 12:51:54.485816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.462 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.495666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.495777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.495805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.495820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.462 [2024-11-05 12:51:54.495832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.462 [2024-11-05 12:51:54.495869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.462 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.505642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.505724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.505749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.505762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.462 [2024-11-05 12:51:54.505775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.462 [2024-11-05 12:51:54.505816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.462 qpair failed and we were unable to recover it. 
00:37:25.462 [2024-11-05 12:51:54.515698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.462 [2024-11-05 12:51:54.515784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.462 [2024-11-05 12:51:54.515812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.462 [2024-11-05 12:51:54.515826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.515839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.515877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.525759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.525870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.525897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.525911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.525924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.525954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.535714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.535799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.535823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.535837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.535850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.535891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.545768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.545850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.545881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.545896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.545908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.545938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.555778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.555864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.555891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.555906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.555918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.555948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.565821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.565918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.565944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.565958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.565971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.566000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.575873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.575959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.575990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.576005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.576017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.576047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.585887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.585970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.585996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.586010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.586023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.586052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.595896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.595975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.596000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.596015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.596028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.596057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.605954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.606043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.606067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.606081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.606093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.606122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.615965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.616044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.616069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.616082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.616100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.616130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.626007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.626088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.626113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.626127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.626139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.626168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.636005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.636089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.636118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.636133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.636145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.463 [2024-11-05 12:51:54.636175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.463 qpair failed and we were unable to recover it. 
00:37:25.463 [2024-11-05 12:51:54.646145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.463 [2024-11-05 12:51:54.646276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.463 [2024-11-05 12:51:54.646302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.463 [2024-11-05 12:51:54.646317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.463 [2024-11-05 12:51:54.646329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.464 [2024-11-05 12:51:54.646358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.464 qpair failed and we were unable to recover it. 
00:37:25.464 [2024-11-05 12:51:54.656096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.464 [2024-11-05 12:51:54.656186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.464 [2024-11-05 12:51:54.656211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.464 [2024-11-05 12:51:54.656226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.464 [2024-11-05 12:51:54.656238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.464 [2024-11-05 12:51:54.656267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.464 qpair failed and we were unable to recover it. 
00:37:25.464 [2024-11-05 12:51:54.666096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.464 [2024-11-05 12:51:54.666187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.464 [2024-11-05 12:51:54.666216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.464 [2024-11-05 12:51:54.666231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.464 [2024-11-05 12:51:54.666243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.464 [2024-11-05 12:51:54.666272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.464 qpair failed and we were unable to recover it. 
00:37:25.464 [2024-11-05 12:51:54.676137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.464 [2024-11-05 12:51:54.676226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.464 [2024-11-05 12:51:54.676252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.464 [2024-11-05 12:51:54.676267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.464 [2024-11-05 12:51:54.676279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.464 [2024-11-05 12:51:54.676309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.464 qpair failed and we were unable to recover it. 
00:37:25.464 [2024-11-05 12:51:54.686271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.464 [2024-11-05 12:51:54.686406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.464 [2024-11-05 12:51:54.686432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.464 [2024-11-05 12:51:54.686447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.464 [2024-11-05 12:51:54.686459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.464 [2024-11-05 12:51:54.686488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.464 qpair failed and we were unable to recover it. 
00:37:25.464 [2024-11-05 12:51:54.696207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.464 [2024-11-05 12:51:54.696295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.464 [2024-11-05 12:51:54.696326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.464 [2024-11-05 12:51:54.696342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.464 [2024-11-05 12:51:54.696354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.464 [2024-11-05 12:51:54.696384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.464 qpair failed and we were unable to recover it. 
00:37:25.722 [2024-11-05 12:51:54.706202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.706300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.706332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.706347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.706359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.706390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.716255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.716341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.716369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.716383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.716395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.716426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.726272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.726368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.726393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.726406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.726418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.726447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.736412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.736505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.736532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.736547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.736559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.736588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.746332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.746417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.746441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.746460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.746473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.746502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.756396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.756476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.756505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.756520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.756532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.756562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.766418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.766505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.766529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.766543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.766556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.766585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.776461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.776546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.776570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.776584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.776597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.776626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.786484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.786606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.786632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.786646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.786658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.786688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.796516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.796606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.796631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.796646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.796658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.796687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.806524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.806623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.806652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.806667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.806680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.806710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.816547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.816628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.816653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.816668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.816680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.816709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.826558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.826658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.723 [2024-11-05 12:51:54.826683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.723 [2024-11-05 12:51:54.826698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.723 [2024-11-05 12:51:54.826711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.723 [2024-11-05 12:51:54.826740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.723 qpair failed and we were unable to recover it. 
00:37:25.723 [2024-11-05 12:51:54.836682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.723 [2024-11-05 12:51:54.836777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.724 [2024-11-05 12:51:54.836804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.724 [2024-11-05 12:51:54.836818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.724 [2024-11-05 12:51:54.836830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.724 [2024-11-05 12:51:54.836868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.724 qpair failed and we were unable to recover it. 
00:37:25.724 [2024-11-05 12:51:54.846646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.724 [2024-11-05 12:51:54.846753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.724 [2024-11-05 12:51:54.846780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.724 [2024-11-05 12:51:54.846794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.724 [2024-11-05 12:51:54.846806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.724 [2024-11-05 12:51:54.846835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.724 qpair failed and we were unable to recover it. 
00:37:25.724 [2024-11-05 12:51:54.856660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.856743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.856766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.856780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.856792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.856822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.866665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.866743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.866768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.866782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.866794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.866835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.876765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.876847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.876880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.876900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.876914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.876944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.886733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.886823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.886847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.886871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.886885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.886916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.896917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.897051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.897077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.897091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.897104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.897134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.906830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.906923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.906953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.906969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.906982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.907013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.916880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.916966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.916992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.917006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.917018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.917054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.926901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.926992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.927017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.927032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.927044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.927073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.936867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.936951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.936977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.936992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.937003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.937033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.946940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.947045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.947071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.947086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.947097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.947127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.724 [2024-11-05 12:51:54.956914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.724 [2024-11-05 12:51:54.957030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.724 [2024-11-05 12:51:54.957056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.724 [2024-11-05 12:51:54.957070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.724 [2024-11-05 12:51:54.957083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.724 [2024-11-05 12:51:54.957113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.724 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:54.966962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:54.967058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:54.967083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:54.967097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:54.967110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:54.967140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:54.976998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:54.977124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:54.977151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:54.977165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:54.977177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:54.977206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:54.987031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:54.987111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:54.987136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:54.987150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:54.987163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:54.987194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:54.997056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:54.997185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:54.997212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:54.997226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:54.997238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:54.997268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:55.007083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:55.007172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:55.007202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:55.007217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:55.007229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:55.007258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:55.017080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:55.017165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:55.017189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:55.017203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:55.017215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:55.017244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:55.027094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:55.027188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:55.027214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:55.027228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:55.027240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:55.027270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:55.037123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:55.037213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:55.037239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:55.037253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.983 [2024-11-05 12:51:55.037265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.983 [2024-11-05 12:51:55.037294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.983 qpair failed and we were unable to recover it.
00:37:25.983 [2024-11-05 12:51:55.047225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.983 [2024-11-05 12:51:55.047325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.983 [2024-11-05 12:51:55.047351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.983 [2024-11-05 12:51:55.047365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.047377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.047412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.057198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.057286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.057310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.057324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.057336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.057365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.067220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.067305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.067330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.067344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.067355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.067384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.077232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.077316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.077342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.077356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.077368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.077397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.087277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.087367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.087396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.087411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.087423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.087465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.097280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.097364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.097388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.097402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.097414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.097444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.107471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.107594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.107620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.107634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.107646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.107675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.117364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.117444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.117469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.117483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.117495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.117525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.127411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.127523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.127546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.127560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.127572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.127601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.137449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.137541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.137576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.137593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.137605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.137635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.147454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.147539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.147564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.147578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.147591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.147620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.157508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.157597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.157623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.157637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.157649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.157678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.167507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.167597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.167622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.167636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.167648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.984 [2024-11-05 12:51:55.167678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.984 qpair failed and we were unable to recover it.
00:37:25.984 [2024-11-05 12:51:55.177513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.984 [2024-11-05 12:51:55.177604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.984 [2024-11-05 12:51:55.177632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.984 [2024-11-05 12:51:55.177647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.984 [2024-11-05 12:51:55.177664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.985 [2024-11-05 12:51:55.177694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.985 qpair failed and we were unable to recover it.
00:37:25.985 [2024-11-05 12:51:55.187590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.985 [2024-11-05 12:51:55.187671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.985 [2024-11-05 12:51:55.187695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.985 [2024-11-05 12:51:55.187709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.985 [2024-11-05 12:51:55.187721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.985 [2024-11-05 12:51:55.187749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.985 qpair failed and we were unable to recover it.
00:37:25.985 [2024-11-05 12:51:55.197593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:25.985 [2024-11-05 12:51:55.197685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:25.985 [2024-11-05 12:51:55.197711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:25.985 [2024-11-05 12:51:55.197725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:25.985 [2024-11-05 12:51:55.197737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:25.985 [2024-11-05 12:51:55.197766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:25.985 qpair failed and we were unable to recover it.
00:37:25.985 [2024-11-05 12:51:55.207617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.985 [2024-11-05 12:51:55.207704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.985 [2024-11-05 12:51:55.207728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.985 [2024-11-05 12:51:55.207742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.985 [2024-11-05 12:51:55.207754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.985 [2024-11-05 12:51:55.207789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.985 qpair failed and we were unable to recover it. 
00:37:25.985 [2024-11-05 12:51:55.217669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:25.985 [2024-11-05 12:51:55.217759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:25.985 [2024-11-05 12:51:55.217786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:25.985 [2024-11-05 12:51:55.217800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:25.985 [2024-11-05 12:51:55.217812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:25.985 [2024-11-05 12:51:55.217841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:25.985 qpair failed and we were unable to recover it. 
00:37:26.243 [2024-11-05 12:51:55.227690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.243 [2024-11-05 12:51:55.227774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.243 [2024-11-05 12:51:55.227800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.243 [2024-11-05 12:51:55.227815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.243 [2024-11-05 12:51:55.227827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.243 [2024-11-05 12:51:55.227856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.243 qpair failed and we were unable to recover it. 
00:37:26.243 [2024-11-05 12:51:55.237681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.243 [2024-11-05 12:51:55.237777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.243 [2024-11-05 12:51:55.237803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.243 [2024-11-05 12:51:55.237817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.243 [2024-11-05 12:51:55.237829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.243 [2024-11-05 12:51:55.237865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.243 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.247733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.247825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.247849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.247870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.247883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.247913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.257782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.257877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.257902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.257916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.257928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.257958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.267778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.267866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.267897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.267911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.267923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.267965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.277815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.277903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.277929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.277943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.277955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.277984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.287842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.287958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.287984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.287998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.288010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.288040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.297888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.298002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.298027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.298041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.298053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.298082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.307894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.307980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.308005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.308024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.308038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.308068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.318007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.318091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.318117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.318131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.318143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.318172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.327976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.328100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.328125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.328139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.328151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.328180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.337974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.338081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.338106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.338121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.338133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.338161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.348099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.348193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.348219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.348234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.348246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.244 [2024-11-05 12:51:55.348275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.244 qpair failed and we were unable to recover it. 
00:37:26.244 [2024-11-05 12:51:55.358043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.244 [2024-11-05 12:51:55.358151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.244 [2024-11-05 12:51:55.358176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.244 [2024-11-05 12:51:55.358190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.244 [2024-11-05 12:51:55.358203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.358244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.368090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.368176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.368201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.368215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.368227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.368256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.378081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.378180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.378205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.378219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.378232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.378261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.388123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.388209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.388233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.388246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.388258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.388287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.398171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.398268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.398294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.398307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.398319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.398348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.408220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.408314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.408349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.408365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.408378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.408408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.418195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.418275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.418300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.418314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.418325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.418355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.428260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.428343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.428367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.428381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.428393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.428422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.438250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.438334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.438361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.438380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.438393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.438422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.448298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.448387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.448413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.448428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.448440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.448470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.458346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.458429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.458453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.458466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.458478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.458508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.468344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.245 [2024-11-05 12:51:55.468434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.245 [2024-11-05 12:51:55.468459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.245 [2024-11-05 12:51:55.468473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.245 [2024-11-05 12:51:55.468485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.245 [2024-11-05 12:51:55.468515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.245 qpair failed and we were unable to recover it. 
00:37:26.245 [2024-11-05 12:51:55.478411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.245 [2024-11-05 12:51:55.478541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.245 [2024-11-05 12:51:55.478567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.245 [2024-11-05 12:51:55.478582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.245 [2024-11-05 12:51:55.478594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.245 [2024-11-05 12:51:55.478630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.245 qpair failed and we were unable to recover it.
00:37:26.505 [2024-11-05 12:51:55.488405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.505 [2024-11-05 12:51:55.488509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.505 [2024-11-05 12:51:55.488535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.505 [2024-11-05 12:51:55.488549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.505 [2024-11-05 12:51:55.488561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.505 [2024-11-05 12:51:55.488591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.505 qpair failed and we were unable to recover it.
00:37:26.505 [2024-11-05 12:51:55.498468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.505 [2024-11-05 12:51:55.498556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.505 [2024-11-05 12:51:55.498580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.505 [2024-11-05 12:51:55.498594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.505 [2024-11-05 12:51:55.498606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.505 [2024-11-05 12:51:55.498635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.505 qpair failed and we were unable to recover it.
00:37:26.505 [2024-11-05 12:51:55.508457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.505 [2024-11-05 12:51:55.508553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.505 [2024-11-05 12:51:55.508578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.505 [2024-11-05 12:51:55.508592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.505 [2024-11-05 12:51:55.508605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.505 [2024-11-05 12:51:55.508635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.505 qpair failed and we were unable to recover it.
00:37:26.505 [2024-11-05 12:51:55.518515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.505 [2024-11-05 12:51:55.518600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.505 [2024-11-05 12:51:55.518626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.505 [2024-11-05 12:51:55.518641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.505 [2024-11-05 12:51:55.518653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.518683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.528551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.528642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.528670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.528686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.528699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.528729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.538644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.538738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.538764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.538778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.538790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.538819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.548639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.548756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.548783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.548797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.548809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.548838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.558609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.558693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.558719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.558734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.558746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.558775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.568677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.568799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.568830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.568846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.568866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.568899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.578690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.578775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.578799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.578812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.578825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.578854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.588719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.588807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.588835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.588849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.588869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.588900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.598748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.598833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.598867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.598885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.598897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.598926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.608795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.608902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.608931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.608948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.608965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.608997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.618795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.618889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.618915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.618929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.618941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.618970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.628834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.628938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.628964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.628978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.628990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.629019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.638897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.639023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.639049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.639063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.639075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.639104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.506 [2024-11-05 12:51:55.648931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.506 [2024-11-05 12:51:55.649026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.506 [2024-11-05 12:51:55.649050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.506 [2024-11-05 12:51:55.649064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.506 [2024-11-05 12:51:55.649076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.506 [2024-11-05 12:51:55.649105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.506 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.658950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.659065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.659090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.659104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.659116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.659147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.668951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.669038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.669073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.669088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.669100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.669129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.678989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.679076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.679102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.679116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.679128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.679170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.689054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.689151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.689176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.689190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.689203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.689232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.699078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.699160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.699189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.699204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.699216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.699245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.709088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.709174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.709199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.709213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.709225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.709254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.719106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.719193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.719220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.719234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.719247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.719278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.729151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.729244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.729268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.729282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.729294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.729323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.507 [2024-11-05 12:51:55.739159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.507 [2024-11-05 12:51:55.739287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.507 [2024-11-05 12:51:55.739313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.507 [2024-11-05 12:51:55.739327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.507 [2024-11-05 12:51:55.739346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.507 [2024-11-05 12:51:55.739376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.507 qpair failed and we were unable to recover it.
00:37:26.814 [2024-11-05 12:51:55.749232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.749313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.749338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.749352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.749364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.749394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.759240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.759328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.759355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.759370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.759382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.759411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.769256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.769347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.769371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.769385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.769397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.769426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.779320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.779428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.779456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.779472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.779484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.779513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.789309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.789391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.789415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.789429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.789441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.789482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.799303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.799435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.799461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.799475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.799486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.799516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.809332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.809420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.809445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.809458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.809470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.809500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.819370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:26.815 [2024-11-05 12:51:55.819455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:26.815 [2024-11-05 12:51:55.819479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:26.815 [2024-11-05 12:51:55.819493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:26.815 [2024-11-05 12:51:55.819505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:26.815 [2024-11-05 12:51:55.819534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:26.815 qpair failed and we were unable to recover it.
00:37:26.815 [2024-11-05 12:51:55.829440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.815 [2024-11-05 12:51:55.829557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.815 [2024-11-05 12:51:55.829588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.815 [2024-11-05 12:51:55.829603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.815 [2024-11-05 12:51:55.829615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.815 [2024-11-05 12:51:55.829644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.815 qpair failed and we were unable to recover it. 
00:37:26.815 [2024-11-05 12:51:55.839456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.815 [2024-11-05 12:51:55.839539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.815 [2024-11-05 12:51:55.839565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.815 [2024-11-05 12:51:55.839579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.815 [2024-11-05 12:51:55.839591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.815 [2024-11-05 12:51:55.839620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.815 qpair failed and we were unable to recover it. 
00:37:26.815 [2024-11-05 12:51:55.849499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.815 [2024-11-05 12:51:55.849612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.815 [2024-11-05 12:51:55.849637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.815 [2024-11-05 12:51:55.849651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.815 [2024-11-05 12:51:55.849663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.815 [2024-11-05 12:51:55.849692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.815 qpair failed and we were unable to recover it. 
00:37:26.815 [2024-11-05 12:51:55.859497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.815 [2024-11-05 12:51:55.859633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.815 [2024-11-05 12:51:55.859659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.815 [2024-11-05 12:51:55.859673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.815 [2024-11-05 12:51:55.859685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.815 [2024-11-05 12:51:55.859714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.815 qpair failed and we were unable to recover it. 
00:37:26.815 [2024-11-05 12:51:55.869531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.815 [2024-11-05 12:51:55.869616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.815 [2024-11-05 12:51:55.869642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.815 [2024-11-05 12:51:55.869665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.815 [2024-11-05 12:51:55.869678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.815 [2024-11-05 12:51:55.869708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.879547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.879638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.879664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.879678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.879690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.879719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.889573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.889660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.889686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.889700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.889712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.889741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.899640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.899755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.899783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.899798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.899809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.899839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.909603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.909687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.909711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.909725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.909737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.909766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.919640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.919725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.919751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.919765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.919777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.919807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.929709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.929796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.929821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.929835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.929848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.929884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.939713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.939796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.939820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.939835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.939847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.939884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.949732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.949819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.949845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.949865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.949880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.949909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.959773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.959873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.959897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.959911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.959924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.959953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.969815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.969937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.969964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.969978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.969991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.970021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.979829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.979924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.979949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.979963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.979975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.980005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.989890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:55.989980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:55.990004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:55.990017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:55.990029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:55.990059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.816 [2024-11-05 12:51:55.999918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.816 [2024-11-05 12:51:56.000012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.816 [2024-11-05 12:51:56.000037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.816 [2024-11-05 12:51:56.000056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.816 [2024-11-05 12:51:56.000069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.816 [2024-11-05 12:51:56.000098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.816 qpair failed and we were unable to recover it. 
00:37:26.817 [2024-11-05 12:51:56.009937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.817 [2024-11-05 12:51:56.010032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.817 [2024-11-05 12:51:56.010057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.817 [2024-11-05 12:51:56.010072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.817 [2024-11-05 12:51:56.010083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.817 [2024-11-05 12:51:56.010113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.817 qpair failed and we were unable to recover it. 
00:37:26.817 [2024-11-05 12:51:56.019956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.817 [2024-11-05 12:51:56.020041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.817 [2024-11-05 12:51:56.020066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.817 [2024-11-05 12:51:56.020080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.817 [2024-11-05 12:51:56.020092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.817 [2024-11-05 12:51:56.020121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.817 qpair failed and we were unable to recover it. 
00:37:26.817 [2024-11-05 12:51:56.029998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.817 [2024-11-05 12:51:56.030084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.817 [2024-11-05 12:51:56.030107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.817 [2024-11-05 12:51:56.030121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.817 [2024-11-05 12:51:56.030133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.817 [2024-11-05 12:51:56.030162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.817 qpair failed and we were unable to recover it. 
00:37:26.817 [2024-11-05 12:51:56.040075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.817 [2024-11-05 12:51:56.040169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.817 [2024-11-05 12:51:56.040198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.817 [2024-11-05 12:51:56.040214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.817 [2024-11-05 12:51:56.040227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.817 [2024-11-05 12:51:56.040262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.817 qpair failed and we were unable to recover it. 
00:37:26.817 [2024-11-05 12:51:56.050049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:26.817 [2024-11-05 12:51:56.050154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:26.817 [2024-11-05 12:51:56.050181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:26.817 [2024-11-05 12:51:56.050195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:26.817 [2024-11-05 12:51:56.050207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:26.817 [2024-11-05 12:51:56.050237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:26.817 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.060091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.060217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.060242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.060256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.060268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.060298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.070117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.070204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.070228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.070242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.070254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.070283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.080128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.080213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.080238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.080252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.080264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.080306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.090186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.090278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.090302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.090315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.090327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.090357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.100195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.100320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.100346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.100360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.100372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.100401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.110224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.110308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.110336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.110350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.110362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.110392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.120238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.120323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.120348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.120362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.120374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.120403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.130344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.130431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.130460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.130474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.130486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.130515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.140338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.140453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.140479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.140493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.140506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.140535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.150313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.150394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.150419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.150433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.150445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.150474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.160365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.160450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.160476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.160491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.160503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.160532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.170386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.170477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.170502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.170516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.076 [2024-11-05 12:51:56.170533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.076 [2024-11-05 12:51:56.170563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.076 qpair failed and we were unable to recover it. 
00:37:27.076 [2024-11-05 12:51:56.180430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.076 [2024-11-05 12:51:56.180553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.076 [2024-11-05 12:51:56.180579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.076 [2024-11-05 12:51:56.180593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.180605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.180634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.190491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.190580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.190606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.190621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.190637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.190668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.200483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.200568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.200594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.200609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.200621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.200650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.210517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.210605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.210632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.210646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.210659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.210688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.220540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.220667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.220694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.220709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.220721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.220751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.230547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.230629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.230653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.230667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.230679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.230708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.240592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.240697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.240723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.240737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.240749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.240778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.250621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.250725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.250751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.250765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.250777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.250819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.260645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.260733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.260764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.260779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.260791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.260821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.270647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.270731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.270755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.270769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.270781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.270810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.280686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.280770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.280796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.280811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.280823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.280852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.290742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.290835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.290869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.290885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.290897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.290927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.300819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.300925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.300950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.300964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.077 [2024-11-05 12:51:56.300982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.077 [2024-11-05 12:51:56.301011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.077 qpair failed and we were unable to recover it. 
00:37:27.077 [2024-11-05 12:51:56.310817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.077 [2024-11-05 12:51:56.310903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.077 [2024-11-05 12:51:56.310929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.077 [2024-11-05 12:51:56.310943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.078 [2024-11-05 12:51:56.310955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.078 [2024-11-05 12:51:56.310985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.078 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.320846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.320940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.320966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.336 [2024-11-05 12:51:56.320979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.336 [2024-11-05 12:51:56.320991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.336 [2024-11-05 12:51:56.321020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.336 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.330877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.330971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.330996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.336 [2024-11-05 12:51:56.331010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.336 [2024-11-05 12:51:56.331023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.336 [2024-11-05 12:51:56.331052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.336 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.340874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.340970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.340995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.336 [2024-11-05 12:51:56.341009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.336 [2024-11-05 12:51:56.341021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.336 [2024-11-05 12:51:56.341051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.336 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.350908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.351018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.351043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.336 [2024-11-05 12:51:56.351057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.336 [2024-11-05 12:51:56.351069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.336 [2024-11-05 12:51:56.351098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.336 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.360940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.361048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.361074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.336 [2024-11-05 12:51:56.361087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.336 [2024-11-05 12:51:56.361099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.336 [2024-11-05 12:51:56.361129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.336 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.371005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.371102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.371128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.336 [2024-11-05 12:51:56.371142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.336 [2024-11-05 12:51:56.371154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.336 [2024-11-05 12:51:56.371183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.336 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.381039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.381133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.381159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.336 [2024-11-05 12:51:56.381173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.336 [2024-11-05 12:51:56.381185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.336 [2024-11-05 12:51:56.381214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.336 qpair failed and we were unable to recover it. 
00:37:27.336 [2024-11-05 12:51:56.391009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.336 [2024-11-05 12:51:56.391093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.336 [2024-11-05 12:51:56.391125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.337 [2024-11-05 12:51:56.391140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.337 [2024-11-05 12:51:56.391152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.337 [2024-11-05 12:51:56.391181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.337 qpair failed and we were unable to recover it. 
00:37:27.337 [2024-11-05 12:51:56.401051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.337 [2024-11-05 12:51:56.401181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.337 [2024-11-05 12:51:56.401207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.337 [2024-11-05 12:51:56.401221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.337 [2024-11-05 12:51:56.401233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:27.337 [2024-11-05 12:51:56.401262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:27.337 qpair failed and we were unable to recover it. 
00:37:27.337 [2024-11-05 12:51:56.411105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.411214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.411240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.411253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.411266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.411295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.421101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.421192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.421217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.421231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.421243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.421273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.431164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.431252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.431276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.431296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.431308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.431338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.441148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.441240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.441267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.441282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.441295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.441325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.451185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.451274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.451298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.451312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.451324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.451365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.461236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.461362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.461388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.461403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.461415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.461461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.471304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.471411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.471438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.471453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.471465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.471494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.481257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.481338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.481364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.481378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.481390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.481431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.491280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.491376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.491401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.491415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.491427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.491456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.501323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.501404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.501429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.501442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.501454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.501483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.511376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.511486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.511514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.511531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.511544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.337 [2024-11-05 12:51:56.511573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.337 qpair failed and we were unable to recover it.
00:37:27.337 [2024-11-05 12:51:56.521361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.337 [2024-11-05 12:51:56.521446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.337 [2024-11-05 12:51:56.521473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.337 [2024-11-05 12:51:56.521487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.337 [2024-11-05 12:51:56.521500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.338 [2024-11-05 12:51:56.521529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.338 qpair failed and we were unable to recover it.
00:37:27.338 [2024-11-05 12:51:56.531441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.338 [2024-11-05 12:51:56.531538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.338 [2024-11-05 12:51:56.531562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.338 [2024-11-05 12:51:56.531576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.338 [2024-11-05 12:51:56.531588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.338 [2024-11-05 12:51:56.531618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.338 qpair failed and we were unable to recover it.
00:37:27.338 [2024-11-05 12:51:56.541439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.338 [2024-11-05 12:51:56.541523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.338 [2024-11-05 12:51:56.541547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.338 [2024-11-05 12:51:56.541560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.338 [2024-11-05 12:51:56.541573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.338 [2024-11-05 12:51:56.541613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.338 qpair failed and we were unable to recover it.
00:37:27.338 [2024-11-05 12:51:56.551433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.338 [2024-11-05 12:51:56.551522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.338 [2024-11-05 12:51:56.551548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.338 [2024-11-05 12:51:56.551563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.338 [2024-11-05 12:51:56.551575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.338 [2024-11-05 12:51:56.551604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.338 qpair failed and we were unable to recover it.
00:37:27.338 [2024-11-05 12:51:56.561545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.338 [2024-11-05 12:51:56.561639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.338 [2024-11-05 12:51:56.561665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.338 [2024-11-05 12:51:56.561685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.338 [2024-11-05 12:51:56.561697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.338 [2024-11-05 12:51:56.561727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.338 qpair failed and we were unable to recover it.
00:37:27.338 [2024-11-05 12:51:56.571542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.338 [2024-11-05 12:51:56.571648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.338 [2024-11-05 12:51:56.571677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.338 [2024-11-05 12:51:56.571693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.338 [2024-11-05 12:51:56.571706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.338 [2024-11-05 12:51:56.571736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.338 qpair failed and we were unable to recover it.
00:37:27.596 [2024-11-05 12:51:56.581585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.596 [2024-11-05 12:51:56.581671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.596 [2024-11-05 12:51:56.581696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.596 [2024-11-05 12:51:56.581710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.596 [2024-11-05 12:51:56.581722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.596 [2024-11-05 12:51:56.581752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.596 qpair failed and we were unable to recover it.
00:37:27.596 [2024-11-05 12:51:56.591578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.596 [2024-11-05 12:51:56.591662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.596 [2024-11-05 12:51:56.591688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.596 [2024-11-05 12:51:56.591705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.596 [2024-11-05 12:51:56.591717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.596 [2024-11-05 12:51:56.591748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.596 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.601588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.601676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.601702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.601716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.601729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.597 [2024-11-05 12:51:56.601766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.611649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.611743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.611772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.611787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.611799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.597 [2024-11-05 12:51:56.611831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.621655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.621743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.621769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.621784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.621796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.597 [2024-11-05 12:51:56.621826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.631720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.631839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.631872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.631888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.631901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.597 [2024-11-05 12:51:56.631930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.641708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.641796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.641821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.641836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.641853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90
00:37:27.597 [2024-11-05 12:51:56.641893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.641931] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:37:27.597 A controller has encountered a failure and is being reset.
00:37:27.597 [2024-11-05 12:51:56.651745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.651838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.651888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.651905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.651918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.651949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.661811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.661941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.661969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.661985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.661997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.662026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.671827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.671923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.671950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.671965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.671978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.672007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.681838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.681936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.681963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.681978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.681990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.682019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.691888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.691983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.692008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.692022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.692034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.692064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.701906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.701995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.702022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.702037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.702049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.702078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.711935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.712020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.712045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.712059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.712071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.712100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.597 [2024-11-05 12:51:56.721975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.597 [2024-11-05 12:51:56.722058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.597 [2024-11-05 12:51:56.722084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.597 [2024-11-05 12:51:56.722099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.597 [2024-11-05 12:51:56.722111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.597 [2024-11-05 12:51:56.722140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.597 qpair failed and we were unable to recover it.
00:37:27.598 [2024-11-05 12:51:56.732029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.598 [2024-11-05 12:51:56.732126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.598 [2024-11-05 12:51:56.732153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.598 [2024-11-05 12:51:56.732175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.598 [2024-11-05 12:51:56.732191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.598 [2024-11-05 12:51:56.732221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.598 qpair failed and we were unable to recover it.
00:37:27.598 [2024-11-05 12:51:56.742055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.598 [2024-11-05 12:51:56.742186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.598 [2024-11-05 12:51:56.742217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.598 [2024-11-05 12:51:56.742232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.598 [2024-11-05 12:51:56.742245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.598 [2024-11-05 12:51:56.742273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.598 qpair failed and we were unable to recover it.
00:37:27.598 [2024-11-05 12:51:56.752033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.598 [2024-11-05 12:51:56.752123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.598 [2024-11-05 12:51:56.752149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.598 [2024-11-05 12:51:56.752164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.598 [2024-11-05 12:51:56.752176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.598 [2024-11-05 12:51:56.752205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.598 qpair failed and we were unable to recover it.
00:37:27.598 [2024-11-05 12:51:56.762050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.762130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.762157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.762171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.762183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.762212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.598 [2024-11-05 12:51:56.772101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.772190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.772217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.772231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.772243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.772278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.598 [2024-11-05 12:51:56.782123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.782209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.782234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.782248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.782261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.782290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.598 [2024-11-05 12:51:56.792180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.792312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.792337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.792351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.792363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.792392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.598 [2024-11-05 12:51:56.802217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.802314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.802340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.802354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.802367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.802395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.598 [2024-11-05 12:51:56.812219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.812310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.812335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.812349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.812362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.812390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.598 [2024-11-05 12:51:56.822244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.822333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.822357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.822371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.822383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.822411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.598 [2024-11-05 12:51:56.832292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.598 [2024-11-05 12:51:56.832370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.598 [2024-11-05 12:51:56.832395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.598 [2024-11-05 12:51:56.832408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.598 [2024-11-05 12:51:56.832421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.598 [2024-11-05 12:51:56.832449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.598 qpair failed and we were unable to recover it. 
00:37:27.857 [2024-11-05 12:51:56.842296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.857 [2024-11-05 12:51:56.842387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.857 [2024-11-05 12:51:56.842412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.857 [2024-11-05 12:51:56.842426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.857 [2024-11-05 12:51:56.842439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.857 [2024-11-05 12:51:56.842467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.857 qpair failed and we were unable to recover it. 
00:37:27.857 [2024-11-05 12:51:56.852396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.857 [2024-11-05 12:51:56.852491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.857 [2024-11-05 12:51:56.852515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.857 [2024-11-05 12:51:56.852528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.857 [2024-11-05 12:51:56.852540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.857 [2024-11-05 12:51:56.852569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.857 qpair failed and we were unable to recover it. 
00:37:27.857 [2024-11-05 12:51:56.862403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.857 [2024-11-05 12:51:56.862515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.857 [2024-11-05 12:51:56.862542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.857 [2024-11-05 12:51:56.862561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.857 [2024-11-05 12:51:56.862574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.857 [2024-11-05 12:51:56.862602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.857 qpair failed and we were unable to recover it. 
00:37:27.857 [2024-11-05 12:51:56.872417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.857 [2024-11-05 12:51:56.872533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.857 [2024-11-05 12:51:56.872563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.857 [2024-11-05 12:51:56.872579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.857 [2024-11-05 12:51:56.872591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.857 [2024-11-05 12:51:56.872620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.857 qpair failed and we were unable to recover it. 
00:37:27.857 [2024-11-05 12:51:56.882468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.857 [2024-11-05 12:51:56.882549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.857 [2024-11-05 12:51:56.882576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.857 [2024-11-05 12:51:56.882590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.857 [2024-11-05 12:51:56.882603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.857 [2024-11-05 12:51:56.882632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.857 qpair failed and we were unable to recover it. 
00:37:27.857 [2024-11-05 12:51:56.892455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.857 [2024-11-05 12:51:56.892542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.857 [2024-11-05 12:51:56.892566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.892580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.892592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.892621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.902513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.902611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.902640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.902657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.902669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.902699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.912520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.912611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.912636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.912650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.912662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.912690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.922504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.922584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.922610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.922624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.922636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.922664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.932578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.932679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.932705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.932720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.932732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.932760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.942598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.942680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.942705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.942718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.942730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.942758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.952635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.952723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.952748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.952762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.952774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.952802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.962659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.962738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.962763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.962777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.962789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.962817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.972725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.972818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.972843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.972866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.972882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.972911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.982698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.982782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.982806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.982820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.982832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.982867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:56.992731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:56.992852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:56.992886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:56.992910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:56.992923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:56.992951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:57.002751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:57.002830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:57.002856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:57.002877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:57.002890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:57.002918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:57.012818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:57.012919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:57.012946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:57.012960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.858 [2024-11-05 12:51:57.012972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.858 [2024-11-05 12:51:57.013000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.858 qpair failed and we were unable to recover it. 
00:37:27.858 [2024-11-05 12:51:57.022890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.858 [2024-11-05 12:51:57.023024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.858 [2024-11-05 12:51:57.023050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.858 [2024-11-05 12:51:57.023064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.859 [2024-11-05 12:51:57.023076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.859 [2024-11-05 12:51:57.023106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.859 qpair failed and we were unable to recover it. 
00:37:27.859 [2024-11-05 12:51:57.032866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.859 [2024-11-05 12:51:57.032966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.859 [2024-11-05 12:51:57.032991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.859 [2024-11-05 12:51:57.033005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.859 [2024-11-05 12:51:57.033018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.859 [2024-11-05 12:51:57.033051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.859 qpair failed and we were unable to recover it. 
00:37:27.859 [2024-11-05 12:51:57.042892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:27.859 [2024-11-05 12:51:57.042985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:27.859 [2024-11-05 12:51:57.043013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:27.859 [2024-11-05 12:51:57.043028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:27.859 [2024-11-05 12:51:57.043043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:27.859 [2024-11-05 12:51:57.043072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:27.859 qpair failed and we were unable to recover it. 
00:37:27.859 [2024-11-05 12:51:57.052924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.859 [2024-11-05 12:51:57.053012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.859 [2024-11-05 12:51:57.053036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.859 [2024-11-05 12:51:57.053050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.859 [2024-11-05 12:51:57.053062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.859 [2024-11-05 12:51:57.053091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.859 qpair failed and we were unable to recover it.
00:37:27.859 [2024-11-05 12:51:57.062953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.859 [2024-11-05 12:51:57.063070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.859 [2024-11-05 12:51:57.063099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.859 [2024-11-05 12:51:57.063116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.859 [2024-11-05 12:51:57.063128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.859 [2024-11-05 12:51:57.063157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.859 qpair failed and we were unable to recover it.
00:37:27.859 [2024-11-05 12:51:57.072976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.859 [2024-11-05 12:51:57.073081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.859 [2024-11-05 12:51:57.073108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.859 [2024-11-05 12:51:57.073123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.859 [2024-11-05 12:51:57.073135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.859 [2024-11-05 12:51:57.073163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.859 qpair failed and we were unable to recover it.
00:37:27.859 [2024-11-05 12:51:57.082985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.859 [2024-11-05 12:51:57.083112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.859 [2024-11-05 12:51:57.083139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.859 [2024-11-05 12:51:57.083153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.859 [2024-11-05 12:51:57.083165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.859 [2024-11-05 12:51:57.083194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.859 qpair failed and we were unable to recover it.
00:37:27.859 [2024-11-05 12:51:57.093023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:27.859 [2024-11-05 12:51:57.093116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:27.859 [2024-11-05 12:51:57.093141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:27.859 [2024-11-05 12:51:57.093155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:27.859 [2024-11-05 12:51:57.093167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:27.859 [2024-11-05 12:51:57.093195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:27.859 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.103039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.103148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.103177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.103192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.103204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.118 [2024-11-05 12:51:57.103232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.118 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.113092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.113216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.113242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.113256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.113269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.118 [2024-11-05 12:51:57.113297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.118 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.123109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.123235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.123261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.123281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.123294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.118 [2024-11-05 12:51:57.123323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.118 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.133124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.133247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.133271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.133285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.133297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.118 [2024-11-05 12:51:57.133325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.118 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.143161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.143259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.143287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.143303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.143315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.118 [2024-11-05 12:51:57.143343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.118 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.153182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.153272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.153297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.153311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.153323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.118 [2024-11-05 12:51:57.153351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.118 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.163229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.163352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.163378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.163392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.163405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.118 [2024-11-05 12:51:57.163439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.118 qpair failed and we were unable to recover it.
00:37:28.118 [2024-11-05 12:51:57.173282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.118 [2024-11-05 12:51:57.173375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.118 [2024-11-05 12:51:57.173400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.118 [2024-11-05 12:51:57.173415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.118 [2024-11-05 12:51:57.173427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.173456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.183329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.183447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.183473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.183487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.183499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.183528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.193269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.193370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.193395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.193409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.193421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.193449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.203304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.203383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.203407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.203421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.203433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.203461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.213378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.213472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.213497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.213510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.213522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.213553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.223418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.223509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.223533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.223546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.223560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.223588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.233419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.233497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.233521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.233535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.233547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.233575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.243452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.243568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.243593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.243608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.243620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.243648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.253482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.253573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.253597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.253616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.253629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.253657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.263481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.263568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.263594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.263608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.263620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.263648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.273515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.273602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.273627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.273641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.273653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.273682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.283519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.283652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.283678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.283692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.283703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.283731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.293612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.293714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.293740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.293754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.293766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.293799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.303608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.119 [2024-11-05 12:51:57.303701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.119 [2024-11-05 12:51:57.303728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.119 [2024-11-05 12:51:57.303748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.119 [2024-11-05 12:51:57.303762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.119 [2024-11-05 12:51:57.303791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.119 qpair failed and we were unable to recover it.
00:37:28.119 [2024-11-05 12:51:57.313628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.120 [2024-11-05 12:51:57.313755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.120 [2024-11-05 12:51:57.313781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.120 [2024-11-05 12:51:57.313795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.120 [2024-11-05 12:51:57.313808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.120 [2024-11-05 12:51:57.313836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.120 qpair failed and we were unable to recover it.
00:37:28.120 [2024-11-05 12:51:57.323625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.120 [2024-11-05 12:51:57.323711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.120 [2024-11-05 12:51:57.323734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.120 [2024-11-05 12:51:57.323748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.120 [2024-11-05 12:51:57.323759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.120 [2024-11-05 12:51:57.323788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.120 qpair failed and we were unable to recover it.
00:37:28.120 [2024-11-05 12:51:57.333687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.120 [2024-11-05 12:51:57.333782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.120 [2024-11-05 12:51:57.333808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.120 [2024-11-05 12:51:57.333822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.120 [2024-11-05 12:51:57.333834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.120 [2024-11-05 12:51:57.333871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.120 qpair failed and we were unable to recover it.
00:37:28.120 [2024-11-05 12:51:57.343758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.120 [2024-11-05 12:51:57.343853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.120 [2024-11-05 12:51:57.343885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.120 [2024-11-05 12:51:57.343899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.120 [2024-11-05 12:51:57.343911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.120 [2024-11-05 12:51:57.343939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.120 qpair failed and we were unable to recover it.
00:37:28.120 [2024-11-05 12:51:57.353771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:28.120 [2024-11-05 12:51:57.353910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:28.120 [2024-11-05 12:51:57.353936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:28.120 [2024-11-05 12:51:57.353950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:28.120 [2024-11-05 12:51:57.353963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690
00:37:28.120 [2024-11-05 12:51:57.353991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:37:28.120 qpair failed and we were unable to recover it.
00:37:28.378 [2024-11-05 12:51:57.363760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:28.378 [2024-11-05 12:51:57.363841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:28.378 [2024-11-05 12:51:57.363874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:28.378 [2024-11-05 12:51:57.363889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.378 [2024-11-05 12:51:57.363901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:28.378 [2024-11-05 12:51:57.363930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-11-05 12:51:57.373811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:28.378 [2024-11-05 12:51:57.373935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:28.378 [2024-11-05 12:51:57.373962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:28.378 [2024-11-05 12:51:57.373976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.378 [2024-11-05 12:51:57.373989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:28.378 [2024-11-05 12:51:57.374018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-11-05 12:51:57.383842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:28.378 [2024-11-05 12:51:57.383938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:28.378 [2024-11-05 12:51:57.383962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:28.378 [2024-11-05 12:51:57.383982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.378 [2024-11-05 12:51:57.383994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f8690 00:37:28.378 [2024-11-05 12:51:57.384023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-11-05 12:51:57.393846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:28.378 [2024-11-05 12:51:57.393960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:28.378 [2024-11-05 12:51:57.393990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:28.378 [2024-11-05 12:51:57.394006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.378 [2024-11-05 12:51:57.394018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:28.378 [2024-11-05 12:51:57.394050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-11-05 12:51:57.403883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:28.378 [2024-11-05 12:51:57.403982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:28.378 [2024-11-05 12:51:57.404010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:28.378 [2024-11-05 12:51:57.404024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.378 [2024-11-05 12:51:57.404036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47b4000b90 00:37:28.378 [2024-11-05 12:51:57.404066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-11-05 12:51:57.413971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:28.378 [2024-11-05 12:51:57.414073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:28.378 [2024-11-05 12:51:57.414105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:28.378 [2024-11-05 12:51:57.414121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.378 [2024-11-05 12:51:57.414134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47ac000b90 00:37:28.378 [2024-11-05 12:51:57.414166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:28.378 qpair failed and we were unable to recover it. 
00:37:28.378 [2024-11-05 12:51:57.424023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:28.378 [2024-11-05 12:51:57.424112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:28.378 [2024-11-05 12:51:57.424141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:28.378 [2024-11-05 12:51:57.424156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.378 [2024-11-05 12:51:57.424168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f47ac000b90 00:37:28.378 [2024-11-05 12:51:57.424218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:37:28.378 qpair failed and we were unable to recover it. 00:37:28.378 Controller properly reset. 00:37:28.378 Initializing NVMe Controllers 00:37:28.378 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:28.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:28.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:28.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:28.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:28.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:28.378 Initialization complete. Launching workers. 
00:37:28.378 Starting thread on core 1 00:37:28.378 Starting thread on core 2 00:37:28.378 Starting thread on core 3 00:37:28.378 Starting thread on core 0 00:37:28.378 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:28.378 00:37:28.378 real 0m10.889s 00:37:28.378 user 0m19.143s 00:37:28.378 sys 0m5.005s 00:37:28.378 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:28.378 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:28.378 ************************************ 00:37:28.378 END TEST nvmf_target_disconnect_tc2 00:37:28.379 ************************************ 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:28.379 rmmod nvme_tcp 00:37:28.379 rmmod nvme_fabrics 00:37:28.379 rmmod nvme_keyring 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 816573 ']' 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 816573 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 816573 ']' 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 816573 00:37:28.379 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:37:28.636 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:28.636 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 816573 00:37:28.636 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:37:28.636 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:37:28.636 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 816573' 00:37:28.636 killing process with pid 816573 00:37:28.636 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 816573 00:37:28.636 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 816573 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:28.894 12:51:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:30.796 12:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:30.796 00:37:30.796 real 0m15.991s 00:37:30.796 user 0m46.056s 00:37:30.796 sys 0m7.156s 00:37:30.796 12:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:30.796 12:51:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:30.796 ************************************ 00:37:30.796 END TEST nvmf_target_disconnect 00:37:30.796 ************************************ 00:37:30.796 12:51:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:30.796 00:37:30.796 real 6m42.813s 00:37:30.796 user 17m13.607s 00:37:30.796 sys 1m26.914s 00:37:30.796 12:51:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:30.796 12:51:59 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:30.796 ************************************ 00:37:30.796 END TEST nvmf_host 00:37:30.796 ************************************ 00:37:30.796 12:51:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:30.796 12:51:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:30.796 12:51:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:30.796 12:51:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:30.796 12:51:59 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:30.796 12:51:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:30.796 ************************************ 00:37:30.796 START TEST nvmf_target_core_interrupt_mode 00:37:30.796 ************************************ 00:37:30.796 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:31.054 * Looking for test storage... 
00:37:31.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:31.054 12:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:31.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.054 --rc 
genhtml_branch_coverage=1 00:37:31.054 --rc genhtml_function_coverage=1 00:37:31.054 --rc genhtml_legend=1 00:37:31.054 --rc geninfo_all_blocks=1 00:37:31.054 --rc geninfo_unexecuted_blocks=1 00:37:31.054 00:37:31.054 ' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:31.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.054 --rc genhtml_branch_coverage=1 00:37:31.054 --rc genhtml_function_coverage=1 00:37:31.054 --rc genhtml_legend=1 00:37:31.054 --rc geninfo_all_blocks=1 00:37:31.054 --rc geninfo_unexecuted_blocks=1 00:37:31.054 00:37:31.054 ' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:31.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.054 --rc genhtml_branch_coverage=1 00:37:31.054 --rc genhtml_function_coverage=1 00:37:31.054 --rc genhtml_legend=1 00:37:31.054 --rc geninfo_all_blocks=1 00:37:31.054 --rc geninfo_unexecuted_blocks=1 00:37:31.054 00:37:31.054 ' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:31.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.054 --rc genhtml_branch_coverage=1 00:37:31.054 --rc genhtml_function_coverage=1 00:37:31.054 --rc genhtml_legend=1 00:37:31.054 --rc geninfo_all_blocks=1 00:37:31.054 --rc geninfo_unexecuted_blocks=1 00:37:31.054 00:37:31.054 ' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.054 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.055 
12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.055 12:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:31.055 
12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:31.055 ************************************ 00:37:31.055 START TEST nvmf_abort 00:37:31.055 ************************************ 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:31.055 * Looking for test storage... 
00:37:31.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:37:31.055 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:31.313 12:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.313 --rc genhtml_branch_coverage=1 00:37:31.313 --rc genhtml_function_coverage=1 00:37:31.313 --rc genhtml_legend=1 00:37:31.313 --rc geninfo_all_blocks=1 00:37:31.313 --rc geninfo_unexecuted_blocks=1 00:37:31.313 00:37:31.313 ' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.313 --rc genhtml_branch_coverage=1 00:37:31.313 --rc genhtml_function_coverage=1 00:37:31.313 --rc genhtml_legend=1 00:37:31.313 --rc geninfo_all_blocks=1 00:37:31.313 --rc geninfo_unexecuted_blocks=1 00:37:31.313 00:37:31.313 ' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.313 --rc genhtml_branch_coverage=1 00:37:31.313 --rc genhtml_function_coverage=1 00:37:31.313 --rc genhtml_legend=1 00:37:31.313 --rc geninfo_all_blocks=1 00:37:31.313 --rc geninfo_unexecuted_blocks=1 00:37:31.313 00:37:31.313 ' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:31.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:31.313 --rc genhtml_branch_coverage=1 00:37:31.313 --rc genhtml_function_coverage=1 00:37:31.313 --rc genhtml_legend=1 00:37:31.313 --rc geninfo_all_blocks=1 00:37:31.313 --rc geninfo_unexecuted_blocks=1 00:37:31.313 00:37:31.313 ' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:31.313 12:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:31.313 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:31.314 12:52:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:31.314 12:52:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:33.842 12:52:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:33.842 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:33.843 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:33.843 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:33.843 
12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:33.843 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:33.843 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.843 12:52:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:33.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:37:33.843 00:37:33.843 --- 10.0.0.2 ping statistics --- 00:37:33.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.843 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:33.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:37:33.843 00:37:33.843 --- 10.0.0.1 ping statistics --- 00:37:33.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.843 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=819384 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 819384 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 819384 ']' 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:33.843 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.843 [2024-11-05 12:52:02.732574] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:33.843 [2024-11-05 12:52:02.733634] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:37:33.843 [2024-11-05 12:52:02.733700] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.843 [2024-11-05 12:52:02.807898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:33.843 [2024-11-05 12:52:02.852927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.843 [2024-11-05 12:52:02.852978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.844 [2024-11-05 12:52:02.853007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.844 [2024-11-05 12:52:02.853019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.844 [2024-11-05 12:52:02.853028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.844 [2024-11-05 12:52:02.854453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:33.844 [2024-11-05 12:52:02.854511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:33.844 [2024-11-05 12:52:02.854515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.844 [2024-11-05 12:52:02.936305] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:33.844 [2024-11-05 12:52:02.936540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:33.844 [2024-11-05 12:52:02.936547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:33.844 [2024-11-05 12:52:02.936781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.844 [2024-11-05 12:52:02.991229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.844 12:52:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:37:33.844 Malloc0 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.844 Delay0 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.844 [2024-11-05 12:52:03.071434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.844 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:34.102 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.102 12:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:34.102 [2024-11-05 12:52:03.150333] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:35.998 Initializing NVMe Controllers 00:37:35.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:35.998 controller IO queue size 128 less than required 00:37:35.998 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:35.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:35.998 Initialization complete. Launching workers. 
00:37:35.998 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28371 00:37:35.998 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28428, failed to submit 66 00:37:35.998 success 28371, unsuccessful 57, failed 0 00:37:35.998 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.998 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.998 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:35.999 rmmod nvme_tcp 00:37:35.999 rmmod nvme_fabrics 00:37:35.999 rmmod nvme_keyring 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:35.999 12:52:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 819384 ']' 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 819384 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 819384 ']' 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 819384 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:35.999 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 819384 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 819384' 00:37:36.257 killing process with pid 819384 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 819384 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 819384 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.257 12:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:38.794 00:37:38.794 real 0m7.310s 00:37:38.794 user 0m8.994s 00:37:38.794 sys 0m2.932s 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:38.794 ************************************ 00:37:38.794 END TEST nvmf_abort 00:37:38.794 ************************************ 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:38.794 ************************************ 00:37:38.794 START TEST nvmf_ns_hotplug_stress 00:37:38.794 ************************************ 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:38.794 * Looking for test storage... 00:37:38.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:38.794 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:38.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.795 --rc genhtml_branch_coverage=1 00:37:38.795 --rc genhtml_function_coverage=1 00:37:38.795 --rc genhtml_legend=1 00:37:38.795 --rc geninfo_all_blocks=1 00:37:38.795 --rc geninfo_unexecuted_blocks=1 00:37:38.795 00:37:38.795 ' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:38.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.795 --rc genhtml_branch_coverage=1 00:37:38.795 --rc genhtml_function_coverage=1 00:37:38.795 --rc genhtml_legend=1 00:37:38.795 --rc geninfo_all_blocks=1 00:37:38.795 --rc geninfo_unexecuted_blocks=1 00:37:38.795 00:37:38.795 ' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:38.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.795 --rc genhtml_branch_coverage=1 00:37:38.795 --rc genhtml_function_coverage=1 00:37:38.795 --rc genhtml_legend=1 00:37:38.795 --rc geninfo_all_blocks=1 00:37:38.795 --rc geninfo_unexecuted_blocks=1 00:37:38.795 00:37:38.795 ' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:38.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.795 --rc genhtml_branch_coverage=1 00:37:38.795 --rc genhtml_function_coverage=1 00:37:38.795 --rc genhtml_legend=1 00:37:38.795 --rc geninfo_all_blocks=1 00:37:38.795 --rc geninfo_unexecuted_blocks=1 00:37:38.795 00:37:38.795 ' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.795 12:52:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:38.795 12:52:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:38.795 12:52:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:40.698 12:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:40.698 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:40.699 
12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:40.699 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:40.699 12:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:40.699 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:40.699 12:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:40.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:40.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:37:40.699 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:40.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:40.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:37:40.700 00:37:40.700 --- 10.0.0.2 ping statistics --- 00:37:40.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:40.700 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:40.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:40.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:37:40.700 00:37:40.700 --- 10.0.0.1 ping statistics --- 00:37:40.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:40.700 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:40.700 12:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:40.700 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=821596 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 821596 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 821596 ']' 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:40.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:40.958 12:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:40.958 [2024-11-05 12:52:09.986309] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:40.958 [2024-11-05 12:52:09.987359] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:37:40.958 [2024-11-05 12:52:09.987413] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:40.958 [2024-11-05 12:52:10.071995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:40.958 [2024-11-05 12:52:10.124273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:40.958 [2024-11-05 12:52:10.124340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:40.958 [2024-11-05 12:52:10.124370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:40.958 [2024-11-05 12:52:10.124382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:40.958 [2024-11-05 12:52:10.124393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:40.958 [2024-11-05 12:52:10.126005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:40.958 [2024-11-05 12:52:10.126032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:40.958 [2024-11-05 12:52:10.126039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:41.216 [2024-11-05 12:52:10.225074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:41.216 [2024-11-05 12:52:10.225232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:41.216 [2024-11-05 12:52:10.225269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:41.216 [2024-11-05 12:52:10.225523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:37:41.216 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:41.474 [2024-11-05 12:52:10.542814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.474 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:41.733 12:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:41.992 [2024-11-05 12:52:11.079185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:41.992 12:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:42.251 12:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:42.510 Malloc0 00:37:42.510 12:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:42.767 Delay0 00:37:42.767 12:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:43.025 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:43.282 NULL1 00:37:43.540 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:43.797 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=822011 00:37:43.797 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:43.797 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:43.797 12:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.729 Read completed with error (sct=0, sc=11) 00:37:44.729 12:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:44.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:44.729 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:44.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:37:44.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:44.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:44.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:44.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:44.986 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:44.986 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:45.244 true 00:37:45.244 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:45.244 12:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.179 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:46.437 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:46.437 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:46.695 true 00:37:46.695 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:46.695 12:52:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.953 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:47.211 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:47.211 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:47.469 true 00:37:47.469 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:47.469 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.727 12:52:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:47.985 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:47.985 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:48.243 true 00:37:48.243 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:48.243 12:52:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:49.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.179 12:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:49.437 12:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:49.437 12:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:49.696 true 00:37:49.696 12:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:49.696 12:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:49.954 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:50.212 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:50.212 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:50.469 true 00:37:50.469 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
822011 00:37:50.469 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:50.727 12:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:51.292 12:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:51.292 12:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:51.292 true 00:37:51.292 12:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:51.292 12:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.225 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:52.482 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:52.482 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:52.740 true 00:37:52.740 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 822011 00:37:52.740 12:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.998 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:53.564 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:53.564 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:53.564 true 00:37:53.564 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:53.564 12:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.822 12:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:54.391 12:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:54.391 12:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:54.391 true 00:37:54.391 12:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:54.391 12:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:55.328 12:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:55.586 12:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:55.586 12:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:55.844 true 00:37:55.844 12:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:55.844 12:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.410 12:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:56.410 12:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:56.410 12:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 
00:37:56.668 true 00:37:56.668 12:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:56.668 12:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.926 12:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:57.492 12:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:57.492 12:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:57.492 true 00:37:57.492 12:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:57.492 12:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:58.426 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:58.683 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:58.683 12:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1014 00:37:58.940 true 00:37:58.940 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:58.940 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:59.198 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:59.456 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:59.456 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:59.714 true 00:37:59.714 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:37:59.714 12:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:59.972 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:00.230 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:00.488 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:00.488 true 00:38:00.745 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:00.745 12:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:01.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:01.683 12:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:01.683 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:01.941 12:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:01.941 12:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:02.220 true 00:38:02.220 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:02.220 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:02.563 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:02.833 12:52:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:02.834 12:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:02.834 true 00:38:02.834 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:02.834 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.092 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:03.350 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:03.350 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:03.610 true 00:38:03.869 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:03.869 12:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:04.803 12:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:38:04.803 12:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:04.803 12:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:05.061 true 00:38:05.061 12:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:05.061 12:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:05.629 12:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.629 12:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:05.629 12:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:05.887 true 00:38:05.887 12:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:05.887 12:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:06.146 12:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:06.711 12:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:06.711 12:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:06.711 true 00:38:06.711 12:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:06.711 12:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.649 12:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:07.906 12:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:07.906 12:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:08.164 true 00:38:08.164 12:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:08.164 12:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:08.730 12:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:08.730 12:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:08.730 12:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:08.988 true 00:38:08.988 12:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:08.988 12:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.247 12:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.507 12:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:09.507 12:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:10.073 true 00:38:10.073 12:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:10.073 12:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:11.006 12:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:11.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:11.264 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:11.264 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:11.522 true 00:38:11.522 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:11.522 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:11.780 12:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.039 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:12.039 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:12.298 true 00:38:12.298 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011 00:38:12.298 12:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1
00:38:13.231 12:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:13.231 12:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:38:13.231 12:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:38:13.488 true
00:38:13.488 12:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011
00:38:13.488 12:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:13.744 12:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:13.744 Initializing NVMe Controllers
00:38:13.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:13.744 Controller IO queue size 128, less than required.
00:38:13.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:13.744 Controller IO queue size 128, less than required.
00:38:13.744 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:13.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:38:13.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:38:13.744 Initialization complete. Launching workers.
00:38:13.744 ========================================================
00:38:13.744 Latency(us)
00:38:13.744 Device Information : IOPS MiB/s Average min max
00:38:13.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 517.11 0.25 101110.35 3192.77 1025416.38
00:38:13.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8516.86 4.16 15028.72 2222.42 458455.95
00:38:13.744 ========================================================
00:38:13.744 Total : 9033.97 4.41 19956.13 2222.42 1025416.38
00:38:13.744
00:38:14.001 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:38:14.001 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:38:14.258 true
00:38:14.259 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 822011
00:38:14.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (822011) - No such process
00:38:14.259 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 822011
00:38:14.259 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:14.516 12:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:15.084 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:38:15.085 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:38:15.085 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:38:15.085 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:15.085 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:38:15.085 null0 00:38:15.085 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:15.085 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:15.085 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:38:15.343 null1 00:38:15.343 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:15.343 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:15.343 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:38:15.603 null2 00:38:15.603 12:52:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:15.603 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:15.603 12:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:38:15.863 null3 00:38:15.863 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:15.863 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:15.863 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:16.122 null4 00:38:16.381 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:16.381 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:16.381 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:16.381 null5 00:38:16.639 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:16.639 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:16.639 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:16.897 null6 00:38:16.897 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:16.897 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:16.897 12:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:17.156 null7 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 826013 826014 826016 826018 826020 826022 826024 826026 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.156 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:17.415 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:17.673 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:17.674 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.674 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:17.674 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:17.674 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:17.674 12:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:17.932 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:17.932 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:17.932 12:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:17.932 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:17.932 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.932 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:17.932 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:17.932 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:18.190 12:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.190 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.191 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:18.191 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:18.191 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:18.191 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:18.448 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:18.449 12:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:18.449 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:18.449 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:18.449 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:18.449 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:18.449 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:18.449 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:19.015 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.016 12:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:19.016 12:52:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:19.016 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:19.016 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:19.274 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:19.274 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:19.274 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:19.274 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.274 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:19.532 12:52:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.532 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:19.533 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:19.533 12:52:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:19.790 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:19.791 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:19.791 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:19.791 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:19.791 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:19.791 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:19.791 12:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.791 12:52:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:20.049 12:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.049 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:20.307 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:20.307 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:20.307 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:20.307 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:20.307 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:20.307 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:20.307 12:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:20.307 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.565 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.565 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.565 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:20.565 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.565 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:20.566 12:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:20.827 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.085 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.085 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.085 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.343 12:52:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.343 12:52:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:21.343 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:21.601 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 
null0 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:21.860 12:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:22.118 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:22.376 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.377 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:22.635 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:22.635 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:22.635 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:22.635 12:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.635 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:22.635 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:22.635 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:22.635 12:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:22.894 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.894 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:22.894 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:22.894 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:38:23.154 12:52:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:23.154 rmmod nvme_tcp 00:38:23.154 rmmod nvme_fabrics 00:38:23.154 rmmod nvme_keyring 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 821596 ']' 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 821596 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 821596 ']' 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 821596 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 821596 00:38:23.154 12:52:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 821596' 00:38:23.154 killing process with pid 821596 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 821596 00:38:23.154 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 821596 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:23.415 12:52:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:23.415 12:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:25.321 00:38:25.321 real 0m46.941s 00:38:25.321 user 3m17.300s 00:38:25.321 sys 0m21.021s 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:25.321 ************************************ 00:38:25.321 END TEST nvmf_ns_hotplug_stress 00:38:25.321 ************************************ 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:25.321 ************************************ 00:38:25.321 START TEST nvmf_delete_subsystem 00:38:25.321 ************************************ 00:38:25.321 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:38:25.580 * Looking for test storage... 00:38:25.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.580 
12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.580 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:38:25.580 12:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:25.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.581 --rc genhtml_branch_coverage=1 00:38:25.581 --rc genhtml_function_coverage=1 00:38:25.581 --rc genhtml_legend=1 00:38:25.581 --rc geninfo_all_blocks=1 00:38:25.581 --rc geninfo_unexecuted_blocks=1 00:38:25.581 00:38:25.581 ' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:25.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.581 --rc genhtml_branch_coverage=1 00:38:25.581 --rc genhtml_function_coverage=1 00:38:25.581 --rc genhtml_legend=1 00:38:25.581 --rc geninfo_all_blocks=1 00:38:25.581 --rc geninfo_unexecuted_blocks=1 00:38:25.581 00:38:25.581 ' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:25.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.581 --rc genhtml_branch_coverage=1 00:38:25.581 --rc genhtml_function_coverage=1 00:38:25.581 --rc genhtml_legend=1 00:38:25.581 --rc geninfo_all_blocks=1 00:38:25.581 --rc 
geninfo_unexecuted_blocks=1 00:38:25.581 00:38:25.581 ' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:25.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.581 --rc genhtml_branch_coverage=1 00:38:25.581 --rc genhtml_function_coverage=1 00:38:25.581 --rc genhtml_legend=1 00:38:25.581 --rc geninfo_all_blocks=1 00:38:25.581 --rc geninfo_unexecuted_blocks=1 00:38:25.581 00:38:25.581 ' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.581 
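The heavily repeated `/opt/golangci/...`, `/opt/protoc/...`, and `/opt/go/...` segments in the `PATH` values above come from `paths/export.sh` prepending its toolchain directories unconditionally on every sourcing. A small sketch of a dedup-guarded prepend that would avoid this (the `prepend_unique` helper is hypothetical, not part of the SPDK scripts):

```shell
# Hypothetical helper: prepend $2 to the colon-separated list $1
# only if it is not already present, unlike the traced export.sh.
prepend_unique() {
  case ":$1:" in
    *":$2:"*) printf '%s\n' "$1" ;;      # already present: unchanged
    *)        printf '%s\n' "$2:$1" ;;   # prepend once
  esac
}

p=/usr/bin:/bin
p=$(prepend_unique "$p" /opt/go/1.21.1/bin)
p=$(prepend_unique "$p" /opt/go/1.21.1/bin)   # second call is a no-op
echo "$p"    # prints: /opt/go/1.21.1/bin:/usr/bin:/bin
```

Wrapping each candidate in colons (`:$1:` / `:$2:`) makes the membership test exact, so `/opt/go` would not falsely match inside `/opt/go/1.21.1/bin`.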
12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:25.581 12:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:38:25.581 12:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.114 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:28.115 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:38:28.115 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:28.115 12:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:28.115 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:28.115 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:28.115 12:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:28.115 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:38:28.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:28.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:38:28.116 00:38:28.116 --- 10.0.0.2 ping statistics --- 00:38:28.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.116 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:28.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:28.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:38:28.116 00:38:28.116 --- 10.0.0.1 ping statistics --- 00:38:28.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:28.116 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=828770 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 828770 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 828770 ']' 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:28.116 12:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 [2024-11-05 12:52:57.027992] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:28.116 [2024-11-05 12:52:57.029110] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:38:28.116 [2024-11-05 12:52:57.029189] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:28.116 [2024-11-05 12:52:57.105249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:28.116 [2024-11-05 12:52:57.151341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.116 [2024-11-05 12:52:57.151412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.116 [2024-11-05 12:52:57.151441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.116 [2024-11-05 12:52:57.151453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.116 [2024-11-05 12:52:57.151462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:28.116 [2024-11-05 12:52:57.152849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:28.116 [2024-11-05 12:52:57.152854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.116 [2024-11-05 12:52:57.235303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:28.116 [2024-11-05 12:52:57.235341] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:28.116 [2024-11-05 12:52:57.235597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 [2024-11-05 12:52:57.294473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 [2024-11-05 12:52:57.313820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 NULL1 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 Delay0 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=828800 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:28.116 12:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:28.374 [2024-11-05 12:52:57.389947] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
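The setup phase traced above issues a fixed sequence of RPCs before launching the perf workload. A minimal standalone sketch of that order, with a stub `rpc_cmd` that only echoes its arguments (in the real test, `rpc_cmd` forwards to SPDK's `scripts/rpc.py` against the running nvmf target):

```shell
#!/usr/bin/env bash
# Stub so the sketch runs without an SPDK target; illustrative only.
rpc_cmd() { echo "rpc: $*"; }

# RPC order from delete_subsystem.sh lines 15-24, as seen in the trace above:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
# Wrap NULL1 in a delay bdev so I/O stays in flight long enough for the
# delete-subsystem race (latency values are in microseconds):
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: with one-second artificial latency, `nvmf_delete_subsystem` is guaranteed to hit the subsystem while the perf workload still has queued I/O, which is what produces the expected `starting I/O failed: -6` completions below.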
00:38:30.279 12:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:30.279 12:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.279 12:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 
00:38:30.279 starting I/O failed: -6 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.279 Write completed with error (sct=0, sc=8) 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 [2024-11-05 12:52:59.512279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff184000c40 is same with the state(6) to be set 00:38:30.279 Read completed with error (sct=0, sc=8) 00:38:30.279 starting I/O failed: -6 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with 
error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 starting I/O failed: -6 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with 
error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 [2024-11-05 12:52:59.513045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244af70 is same with the state(6) to be set 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, 
sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Write 
completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 [2024-11-05 12:52:59.513525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff18400d350 is same with the state(6) to be set 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with 
error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Read completed with error (sct=0, sc=8) 00:38:30.280 Write completed with error (sct=0, sc=8) 00:38:31.725 [2024-11-05 12:53:00.490707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449190 is same with the state(6) to be set 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 [2024-11-05 12:53:00.510879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244b510 is same with the state(6) to be set 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error 
(sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 [2024-11-05 12:53:00.511085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244b150 is same with the state(6) to be set 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 00:38:31.725 Write completed with error (sct=0, sc=8) 00:38:31.725 Read completed with error (sct=0, sc=8) 
00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Write completed with error (sct=0, sc=8) 00:38:31.726 Write completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 [2024-11-05 12:53:00.514051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff18400d020 is same with the state(6) to be set 00:38:31.726 Write completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Write completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Write completed with error (sct=0, sc=8) 00:38:31.726 Write completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Write completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 Read completed with error (sct=0, sc=8) 00:38:31.726 [2024-11-05 12:53:00.515592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff18400d680 is same with the state(6) to be set 00:38:31.726 Initializing NVMe Controllers 00:38:31.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:31.726 
Controller IO queue size 128, less than required. 00:38:31.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:31.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:31.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:31.726 Initialization complete. Launching workers. 00:38:31.726 ======================================================== 00:38:31.726 Latency(us) 00:38:31.726 Device Information : IOPS MiB/s Average min max 00:38:31.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.12 0.08 901855.74 543.25 1014066.72 00:38:31.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.64 0.08 906832.15 1260.61 1012085.40 00:38:31.726 ======================================================== 00:38:31.726 Total : 331.76 0.16 904325.35 543.25 1014066.72 00:38:31.726 00:38:31.726 [2024-11-05 12:53:00.516076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2449190 (9): Bad file descriptor 00:38:31.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:31.726 12:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.726 12:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:31.726 12:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 828800 00:38:31.726 12:53:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:32.012 12:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 828800 00:38:32.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (828800) - No such process 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 828800 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 828800 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 828800 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:32.012 12:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:32.012 [2024-11-05 12:53:01.037735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.012 12:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=829317 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:32.012 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:32.012 [2024-11-05 12:53:01.099053] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
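The `kill -0` / `sleep 0.5` cycle traced around `perf_pid` is a bounded wait loop: poll the perf process until it exits (bash reaps the child, so `kill -0` then reports "No such process") or a retry budget runs out. A self-contained sketch of that pattern, with `sleep 1` standing in for the `spdk_nvme_perf` workload:

```shell
#!/usr/bin/env bash
# Bounded-wait sketch of the delete_subsystem.sh polling loop; the stand-in
# process and the budget of 20 retries mirror the script, names illustrative.
sleep 1 &                       # stand-in for the spdk_nvme_perf workload
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # succeeds while the pid exists
    if (( delay++ > 20 )); then
        echo "timed out waiting for pid $perf_pid" >&2
        exit 1
    fi
    sleep 0.5
done
echo "pid $perf_pid has exited"
```

Note the loop tolerates the race seen in the log: once the perf process dies mid-poll, `kill -0` fails ("No such process") and the loop simply falls through rather than treating it as an error.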
00:38:32.578 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:32.578 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317 00:38:32.578 12:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:32.836 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:32.836 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317 00:38:32.836 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:33.403 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:33.403 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317 00:38:33.403 12:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:33.969 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:33.970 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317 00:38:33.970 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:34.538 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:34.538 12:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317 00:38:34.538 12:53:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:35.103 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:35.103 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317 00:38:35.103 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:35.103 Initializing NVMe Controllers 00:38:35.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:35.103 Controller IO queue size 128, less than required. 00:38:35.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:35.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:35.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:35.103 Initialization complete. Launching workers. 
00:38:35.103 ========================================================
00:38:35.103 Latency(us)
00:38:35.103 Device Information : IOPS MiB/s Average min max
00:38:35.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004637.54 1000277.24 1013787.03
00:38:35.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004891.16 1000246.07 1011371.34
00:38:35.103 ========================================================
00:38:35.103 Total : 256.00 0.12 1004764.35 1000246.07 1013787.03
00:38:35.103
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 829317
00:38:35.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (829317) - No such process
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 829317
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:38:35.361 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:35.361 rmmod nvme_tcp 00:38:35.361 rmmod nvme_fabrics 00:38:35.361 rmmod nvme_keyring 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 828770 ']' 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 828770 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 828770 ']' 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 828770 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 828770 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 828770' 00:38:35.619 killing process with pid 828770 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 828770 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 828770 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:35.619 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:35.620 12:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:38.153
00:38:38.153 real 0m12.337s
00:38:38.153 user 0m24.530s
00:38:38.153 sys 0m3.794s
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:38:38.153 ************************************
00:38:38.153 END TEST nvmf_delete_subsystem
00:38:38.153 ************************************
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:38.153 ************************************
00:38:38.153 START TEST nvmf_host_management
00:38:38.153 ************************************
00:38:38.153 12:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
* Looking for test storage...
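The `nvmftestfini` teardown traced above restores the firewall with `iptables-save | iptables-restore`, filtering out only SPDK's own rules with `grep -v SPDK_NVMF`. A dry run of that grep stage over a canned ruleset (the two rules below are made up for illustration, not taken from the test rig):

```shell
#!/usr/bin/env bash
# The teardown pipeline from nvmf/common.sh's iptr() step is:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Here we apply the filter stage to a fabricated ruleset instead of
# touching live iptables state.
rules='-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Drop every rule tagged SPDK_NVMF; everything else would be restored.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
echo "$kept"
```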
00:38:38.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:38.153 12:53:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.153 --rc genhtml_branch_coverage=1 00:38:38.153 --rc genhtml_function_coverage=1 00:38:38.153 --rc genhtml_legend=1 00:38:38.153 --rc geninfo_all_blocks=1 00:38:38.153 --rc geninfo_unexecuted_blocks=1 00:38:38.153 00:38:38.153 ' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.153 --rc genhtml_branch_coverage=1 00:38:38.153 --rc genhtml_function_coverage=1 00:38:38.153 --rc genhtml_legend=1 00:38:38.153 --rc geninfo_all_blocks=1 00:38:38.153 --rc geninfo_unexecuted_blocks=1 00:38:38.153 00:38:38.153 ' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.153 --rc genhtml_branch_coverage=1 00:38:38.153 --rc genhtml_function_coverage=1 00:38:38.153 --rc genhtml_legend=1 00:38:38.153 --rc geninfo_all_blocks=1 00:38:38.153 --rc geninfo_unexecuted_blocks=1 00:38:38.153 00:38:38.153 ' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:38.153 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.153 --rc genhtml_branch_coverage=1 00:38:38.153 --rc genhtml_function_coverage=1 00:38:38.153 --rc genhtml_legend=1 00:38:38.153 --rc geninfo_all_blocks=1 00:38:38.153 --rc geninfo_unexecuted_blocks=1 00:38:38.153 00:38:38.153 ' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.153 12:53:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.153 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.154 
12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:38.154 12:53:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:40.058 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:40.059 
12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:40.059 12:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:40.059 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:40.059 12:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:40.059 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:40.059 12:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:40.059 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:40.059 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:40.059 12:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
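The device-discovery trace above (nvmf/common.sh@410–429) resolves each E810 PCI address to its kernel net device by globbing sysfs and keeping only the basename. A minimal Python sketch of that lookup — a stand-in temporary directory replaces `/sys/bus/pci/devices`, and the helper name `net_devs_for_pci` is illustrative, not part of SPDK:

```python
import os
import tempfile

def net_devs_for_pci(sysfs_root, pci_addr):
    # Mirrors common.sh@411/427: glob "/sys/bus/pci/devices/$pci/net/"* and
    # keep only the basenames ("${pci_net_devs[@]##*/}").
    net_dir = os.path.join(sysfs_root, pci_addr, "net")
    if not os.path.isdir(net_dir):
        return []
    return sorted(os.listdir(net_dir))

# Stand-in sysfs tree matching the log: two E810 ports, one netdev each.
root = tempfile.mkdtemp()
for pci, dev in (("0000:0a:00.0", "cvl_0_0"), ("0000:0a:00.1", "cvl_0_1")):
    os.makedirs(os.path.join(root, pci, "net", dev))

net_devs = []
for pci in ("0000:0a:00.0", "0000:0a:00.1"):
    net_devs += net_devs_for_pci(root, pci)  # common.sh@429: net_devs+=(...)

print(net_devs)
```

With both ports populated this yields the two `cvl_*` names the log reports, after which the `(( 2 == 0 ))` guard passes and `is_hw=yes` is set.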
00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:40.059 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:40.318 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:40.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:40.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:38:40.319 00:38:40.319 --- 10.0.0.2 ping statistics --- 00:38:40.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:40.319 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:40.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:40.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:38:40.319 00:38:40.319 --- 10.0.0.1 ping statistics --- 00:38:40.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:40.319 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
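The `nvmf_tcp_init` sequence traced above isolates the target port in a network namespace so initiator and target can talk over real hardware on one host. The sketch below only assembles the command strings (interface names, namespace, and IPs are taken from the log); it does not execute them, since they require root:

```python
# Sketch of the nvmf_tcp_init steps traced above. Values come from the log;
# the commands are collected as strings, not run (they need CAP_NET_ADMIN).
target_if, initiator_if = "cvl_0_0", "cvl_0_1"
ns = f"{target_if}_ns_spdk"
target_ip, initiator_ip = "10.0.0.2", "10.0.0.1"

setup = [
    f"ip -4 addr flush {target_if}",
    f"ip -4 addr flush {initiator_if}",
    f"ip netns add {ns}",
    f"ip link set {target_if} netns {ns}",            # target port moves into the namespace
    f"ip addr add {initiator_ip}/24 dev {initiator_if}",
    f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
    f"ip link set {initiator_if} up",
    f"ip netns exec {ns} ip link set {target_if} up",
    f"ip netns exec {ns} ip link set lo up",
    f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport 4420 -j ACCEPT",
]
print(len(setup))
```

The two `ping -c 1` checks in the log then verify reachability in both directions before `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk` so the target runs inside the namespace.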
00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=831655 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 831655 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 831655 ']' 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:40.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:40.319 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.319 [2024-11-05 12:53:09.521033] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:40.319 [2024-11-05 12:53:09.522122] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:38:40.319 [2024-11-05 12:53:09.522186] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:40.579 [2024-11-05 12:53:09.593654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:40.579 [2024-11-05 12:53:09.640958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:40.579 [2024-11-05 12:53:09.641013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:40.579 [2024-11-05 12:53:09.641042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:40.579 [2024-11-05 12:53:09.641054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:40.579 [2024-11-05 12:53:09.641064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:40.579 [2024-11-05 12:53:09.642684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:40.579 [2024-11-05 12:53:09.642750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:40.579 [2024-11-05 12:53:09.642814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:40.579 [2024-11-05 12:53:09.642816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:40.579 [2024-11-05 12:53:09.725291] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:40.579 [2024-11-05 12:53:09.725517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:40.579 [2024-11-05 12:53:09.725768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:40.579 [2024-11-05 12:53:09.726352] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:40.579 [2024-11-05 12:53:09.726571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.579 [2024-11-05 12:53:09.783513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.579 12:53:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.579 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.838 Malloc0 00:38:40.838 [2024-11-05 12:53:09.867957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=831702 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 831702 /var/tmp/bdevperf.sock 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 831702 ']' 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:40.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:40.838 { 00:38:40.838 "params": { 00:38:40.838 "name": "Nvme$subsystem", 00:38:40.838 "trtype": "$TEST_TRANSPORT", 00:38:40.838 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:40.838 "adrfam": "ipv4", 00:38:40.838 "trsvcid": "$NVMF_PORT", 00:38:40.838 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:40.838 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:40.838 "hdgst": ${hdgst:-false}, 00:38:40.838 "ddgst": ${ddgst:-false} 00:38:40.838 }, 00:38:40.838 "method": "bdev_nvme_attach_controller" 00:38:40.838 } 00:38:40.838 EOF 00:38:40.838 )") 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:40.838 12:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:40.838 "params": { 00:38:40.838 "name": "Nvme0", 00:38:40.838 "trtype": "tcp", 00:38:40.838 "traddr": "10.0.0.2", 00:38:40.838 "adrfam": "ipv4", 00:38:40.838 "trsvcid": "4420", 00:38:40.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.838 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:40.838 "hdgst": false, 00:38:40.838 "ddgst": false 00:38:40.838 }, 00:38:40.838 "method": "bdev_nvme_attach_controller" 00:38:40.838 }' 00:38:40.838 [2024-11-05 12:53:09.954060] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:38:40.838 [2024-11-05 12:53:09.954138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831702 ] 00:38:40.838 [2024-11-05 12:53:10.024976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.096 [2024-11-05 12:53:10.079899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.354 Running I/O for 10 seconds... 
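The heredoc template above (`gen_nvmf_target_json`) is expanded once per subsystem index and piped to bdevperf via `--json /dev/fd/63`; the log shows the resolved entry for subsystem 0. A sketch of that substitution — the function name `nvme_attach_entry` is illustrative, and only the single entry visible in the log is reproduced, not the full config wrapper:

```python
import json

def nvme_attach_entry(subsystem, target_ip="10.0.0.2", port="4420"):
    # One entry of the heredoc template, with $subsystem,
    # $NVMF_FIRST_TARGET_IP and $NVMF_PORT substituted (values from the log).
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": target_ip,
            "adrfam": "ipv4",
            "trsvcid": port,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }

entry = nvme_attach_entry(0)
print(json.dumps(entry, indent=1))
```

This matches the `printf '%s\n'` output in the log: bdevperf attaches controller `Nvme0` to the target listening at 10.0.0.2:4420 and drives the `-q 64 -o 65536 -w verify` workload against it.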
00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:41.354 12:53:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:38:41.354 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:41.614 [2024-11-05 12:53:10.763769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.614 [2024-11-05 12:53:10.763827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.763853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.614 [2024-11-05 12:53:10.763886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.763901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.614 [2024-11-05 12:53:10.763915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.763929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:41.614 [2024-11-05 12:53:10.763942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.763956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bc970 is same with the state(6) to be set 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.614 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:41.614 [2024-11-05 12:53:10.769207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.614 [2024-11-05 12:53:10.769234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.769276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.614 [2024-11-05 12:53:10.769302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.769319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.614 [2024-11-05 12:53:10.769332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.769348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.614 [2024-11-05 12:53:10.769361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.769376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.614 [2024-11-05 12:53:10.769390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.769404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.614 [2024-11-05 12:53:10.769418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:41.614 [2024-11-05 12:53:10.769433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:41.614 [2024-11-05 12:53:10.769446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:41.614 [2024-11-05 12:53:10.769460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:41.614 [2024-11-05 12:53:10.769474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeats for the remaining queued commands (cid:8 through cid:63), lba stepping by 128 blocks per command from 82944 to 89856 ...]
00:38:41.616 [2024-11-05 12:53:10.771092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:41.616 [2024-11-05 12:53:10.771105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:41.616 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:41.616 12:53:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:38:41.616 [2024-11-05 12:53:10.772324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:38:41.616 task offset: 81920 on job bdev=Nvme0n1 fails
00:38:41.616
00:38:41.616 Latency(us)
00:38:41.616 [2024-11-05T11:53:10.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:41.616 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:38:41.616 Job: Nvme0n1 ended in about 0.40 seconds with error
00:38:41.616 Verification LBA range: start 0x0 length 0x400
00:38:41.616 Nvme0n1 : 0.40 1614.31 100.89 161.43 0.00 34977.06 2366.58 34175.81
00:38:41.616 [2024-11-05T11:53:10.854Z] ===================================================================================================================
00:38:41.616 [2024-11-05T11:53:10.854Z] Total : 1614.31 100.89 161.43 0.00 34977.06 2366.58 34175.81
00:38:41.616 [2024-11-05 12:53:10.774211] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:38:41.616 [2024-11-05 12:53:10.774259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bc970 (9): Bad file descriptor
00:38:41.616 [2024-11-05 12:53:10.818083] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
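The MiB/s column in the bdevperf summary above follows directly from the IOPS column and the fixed IO size (`-o 65536`, i.e. 64 KiB, so MiB/s = IOPS / 16). A small illustrative cross-check, not part of the test suite (`iops_to_mibs` is a hypothetical helper name):

```shell
#!/bin/sh
# Cross-check bdevperf's MiB/s column against its IOPS column.
# The runs in this log used "-o 65536" (64 KiB per IO), so MiB/s = IOPS / 16.
iops_to_mibs() {
    awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops * 65536 / (1024 * 1024) }'
}

iops_to_mibs 1614.31   # failed run's summary:     100.89
iops_to_mibs 1686.30   # successful run's summary: 105.39
iops_to_mibs 1664.00   # in-flight progress line:  104.00
```

The same scaling explains the Fail/s column being reported alongside a failed-IO rate in IOPS terms.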
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 831702
00:38:42.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (831702) - No such process
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:42.548 {
00:38:42.548 "params": {
00:38:42.548 "name": "Nvme$subsystem",
00:38:42.548 "trtype": "$TEST_TRANSPORT",
00:38:42.548 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:42.548 "adrfam": "ipv4",
00:38:42.548 "trsvcid": "$NVMF_PORT",
00:38:42.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:42.548 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:42.548 "hdgst": ${hdgst:-false},
00:38:42.548 "ddgst": ${ddgst:-false}
00:38:42.548 },
00:38:42.548 "method": "bdev_nvme_attach_controller"
00:38:42.548 }
00:38:42.548 EOF
00:38:42.548 )")
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:38:42.548 12:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:42.548 "params": {
00:38:42.548 "name": "Nvme0",
00:38:42.548 "trtype": "tcp",
00:38:42.548 "traddr": "10.0.0.2",
00:38:42.548 "adrfam": "ipv4",
00:38:42.548 "trsvcid": "4420",
00:38:42.548 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:42.548 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:42.548 "hdgst": false,
00:38:42.548 "ddgst": false
00:38:42.548 },
00:38:42.548 "method": "bdev_nvme_attach_controller"
00:38:42.548 }'
00:38:42.807 [2024-11-05 12:53:11.824083] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization...
00:38:42.807 [2024-11-05 12:53:11.824179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831974 ]
00:38:42.807 [2024-11-05 12:53:11.893924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:42.807 [2024-11-05 12:53:11.940529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:43.065 Running I/O for 1 seconds...
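The JSON that bdevperf reads on `/dev/fd/62` above is produced by `gen_nvmf_target_json`, which expands one heredoc template per subsystem id and joins the resulting stanzas with `IFS=,`. A stripped-down sketch of the same pattern (the `gen_json` name is hypothetical, the addresses are the example values from this run, and the `jq .` normalization step is omitted):

```shell
#!/bin/sh
# Hypothetical, simplified re-creation of the gen_nvmf_target_json pattern
# seen in the log: emit one "params" stanza per subsystem id, comma-joined.
# (The real helper in nvmf/common.sh builds the stanzas from a heredoc
# template into a bash array and pretty-prints the result with `jq .`.)
gen_json() {
    [ $# -eq 0 ] && set -- 0   # default to subsystem 0, as in this run
    first=1
    for subsystem in "$@"; do
        [ "$first" -eq 1 ] || printf ','
        first=0
        printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' \
            "$subsystem" "$subsystem" "$subsystem"
    done
    printf '\n'
}

gen_json 0
```

Feeding the config through a file descriptor (`--json /dev/fd/62`) lets the harness hand bdevperf a generated config without writing a temporary file.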
00:38:44.441 1664.00 IOPS, 104.00 MiB/s
00:38:44.441 Latency(us)
00:38:44.441 [2024-11-05T11:53:13.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:44.441 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:38:44.441 Verification LBA range: start 0x0 length 0x400
00:38:44.441 Nvme0n1 : 1.02 1686.30 105.39 0.00 0.00 37337.58 5437.06 33204.91
00:38:44.441 [2024-11-05T11:53:13.679Z] ===================================================================================================================
00:38:44.441 [2024-11-05T11:53:13.679Z] Total : 1686.30 105.39 0.00 0.00 37337.58 5437.06 33204.91
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:44.441 rmmod nvme_tcp
00:38:44.441 rmmod nvme_fabrics
00:38:44.441 rmmod nvme_keyring
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 831655 ']'
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 831655
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 831655 ']'
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 831655
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 831655
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 831655'
00:38:44.441 killing process with pid 831655
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 831655
00:38:44.441 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 831655
00:38:44.700 [2024-11-05 12:53:13.775788] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:44.700 12:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:38:47.233
00:38:47.233 real 0m8.911s
00:38:47.233 user 0m17.589s
00:38:47.233 sys 0m3.881s
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:47.233 ************************************
00:38:47.233 END TEST nvmf_host_management
00:38:47.233 ************************************
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:47.233 ************************************
00:38:47.233 START TEST nvmf_lvol
00:38:47.233 ************************************
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:38:47.233 * Looking for test storage...
00:38:47.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version
00:38:47.233 12:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:38:47.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.233 --rc genhtml_branch_coverage=1
00:38:47.233 --rc genhtml_function_coverage=1
00:38:47.233 --rc genhtml_legend=1
00:38:47.233 --rc geninfo_all_blocks=1
00:38:47.233 --rc geninfo_unexecuted_blocks=1
00:38:47.233
00:38:47.233 '
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:38:47.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.233 --rc genhtml_branch_coverage=1
00:38:47.233 --rc genhtml_function_coverage=1
00:38:47.233 --rc genhtml_legend=1
00:38:47.233 --rc geninfo_all_blocks=1
00:38:47.233 --rc geninfo_unexecuted_blocks=1
00:38:47.233
00:38:47.233 '
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:38:47.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.233 --rc genhtml_branch_coverage=1
00:38:47.233 --rc genhtml_function_coverage=1
00:38:47.233 --rc genhtml_legend=1
00:38:47.233 --rc geninfo_all_blocks=1
00:38:47.233 --rc geninfo_unexecuted_blocks=1
00:38:47.233
00:38:47.233 '
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:38:47.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.233 --rc genhtml_branch_coverage=1
00:38:47.233 --rc genhtml_function_coverage=1
00:38:47.233 --rc genhtml_legend=1
00:38:47.233 --rc geninfo_all_blocks=1
00:38:47.233 --rc geninfo_unexecuted_blocks=1
00:38:47.233
00:38:47.233 '
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:47.233 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- #
NVME_CONNECT='nvme connect' 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:47.234 
12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:47.234 12:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:49.139 12:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:49.139 12:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:49.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:49.139 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:49.140 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:49.140 12:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:49.140 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.140 12:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:49.140 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:49.140 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:49.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:38:49.399 00:38:49.399 --- 10.0.0.2 ping statistics --- 00:38:49.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.399 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:49.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:49.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:38:49.399 00:38:49.399 --- 10.0.0.1 ping statistics --- 00:38:49.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.399 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=834178 
00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 834178 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 834178 ']' 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:49.399 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:49.399 [2024-11-05 12:53:18.531983] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:49.399 [2024-11-05 12:53:18.533058] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:38:49.399 [2024-11-05 12:53:18.533112] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.399 [2024-11-05 12:53:18.603371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:49.657 [2024-11-05 12:53:18.646889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.657 [2024-11-05 12:53:18.646949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:49.657 [2024-11-05 12:53:18.646977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.657 [2024-11-05 12:53:18.646988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.657 [2024-11-05 12:53:18.646997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:49.657 [2024-11-05 12:53:18.648415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.657 [2024-11-05 12:53:18.648482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:49.657 [2024-11-05 12:53:18.648485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.657 [2024-11-05 12:53:18.730196] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:49.657 [2024-11-05 12:53:18.730417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:49.657 [2024-11-05 12:53:18.730428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:38:49.657 [2024-11-05 12:53:18.730671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:49.657 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:49.657 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:38:49.657 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:49.657 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:49.657 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:49.657 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:49.657 12:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:49.915 [2024-11-05 12:53:19.037258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:49.915 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:50.174 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:50.174 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:50.431 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:50.431 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:50.999 12:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:51.258 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4fc064fe-c83f-4e4e-97da-550d42b1ecc7 00:38:51.258 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4fc064fe-c83f-4e4e-97da-550d42b1ecc7 lvol 20 00:38:51.517 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d8c8a730-5458-418c-ba5e-155ff5a14dc3 00:38:51.517 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:51.776 12:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d8c8a730-5458-418c-ba5e-155ff5a14dc3 00:38:52.033 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:52.292 [2024-11-05 12:53:21.345396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.292 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:52.550 
12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=834602 00:38:52.550 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:52.550 12:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:53.487 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d8c8a730-5458-418c-ba5e-155ff5a14dc3 MY_SNAPSHOT 00:38:53.744 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6b09a3a1-b91c-4739-8c8d-0079c188cafb 00:38:53.744 12:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d8c8a730-5458-418c-ba5e-155ff5a14dc3 30 00:38:54.310 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6b09a3a1-b91c-4739-8c8d-0079c188cafb MY_CLONE 00:38:54.568 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3c79126d-f9ec-4c07-98e4-74ed06ffbfad 00:38:54.568 12:53:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3c79126d-f9ec-4c07-98e4-74ed06ffbfad 00:38:55.135 12:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 834602 00:39:03.250 Initializing NVMe Controllers 00:39:03.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:03.250 
Controller IO queue size 128, less than required. 00:39:03.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:03.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:03.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:03.250 Initialization complete. Launching workers. 00:39:03.250 ======================================================== 00:39:03.250 Latency(us) 00:39:03.250 Device Information : IOPS MiB/s Average min max 00:39:03.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10464.90 40.88 12237.28 5862.41 73154.83 00:39:03.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10594.50 41.38 12081.89 6753.67 58968.95 00:39:03.250 ======================================================== 00:39:03.250 Total : 21059.40 82.26 12159.11 5862.41 73154.83 00:39:03.250 00:39:03.250 12:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:03.250 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d8c8a730-5458-418c-ba5e-155ff5a14dc3 00:39:03.508 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4fc064fe-c83f-4e4e-97da-550d42b1ecc7 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.768 rmmod nvme_tcp 00:39:03.768 rmmod nvme_fabrics 00:39:03.768 rmmod nvme_keyring 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 834178 ']' 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 834178 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 834178 ']' 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 834178 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:03.768 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 834178 00:39:03.769 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:03.769 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:03.769 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 834178' 00:39:03.769 killing process with pid 834178 00:39:03.769 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 834178 00:39:03.769 12:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 834178 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:04.028 12:53:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:04.028 12:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:06.564 00:39:06.564 real 0m19.268s 00:39:06.564 user 0m55.716s 00:39:06.564 sys 0m7.925s 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:06.564 ************************************ 00:39:06.564 END TEST nvmf_lvol 00:39:06.564 ************************************ 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:06.564 ************************************ 00:39:06.564 START TEST nvmf_lvs_grow 00:39:06.564 ************************************ 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:39:06.564 * Looking for test storage... 
00:39:06.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:06.564 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:06.565 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:06.565 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:06.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.565 --rc genhtml_branch_coverage=1 00:39:06.565 --rc genhtml_function_coverage=1 00:39:06.565 --rc genhtml_legend=1 00:39:06.565 --rc geninfo_all_blocks=1 00:39:06.565 --rc geninfo_unexecuted_blocks=1 00:39:06.565 00:39:06.565 ' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:06.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.565 --rc genhtml_branch_coverage=1 00:39:06.565 --rc genhtml_function_coverage=1 00:39:06.565 --rc genhtml_legend=1 00:39:06.565 --rc geninfo_all_blocks=1 00:39:06.565 --rc geninfo_unexecuted_blocks=1 00:39:06.565 00:39:06.565 ' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:06.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.565 --rc genhtml_branch_coverage=1 00:39:06.565 --rc genhtml_function_coverage=1 00:39:06.565 --rc genhtml_legend=1 00:39:06.565 --rc geninfo_all_blocks=1 00:39:06.565 --rc geninfo_unexecuted_blocks=1 00:39:06.565 00:39:06.565 ' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:06.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.565 --rc genhtml_branch_coverage=1 00:39:06.565 --rc genhtml_function_coverage=1 00:39:06.565 --rc genhtml_legend=1 00:39:06.565 --rc geninfo_all_blocks=1 00:39:06.565 --rc 
geninfo_unexecuted_blocks=1 00:39:06.565 00:39:06.565 ' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:06.565 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.565 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:06.565 12:53:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.565 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:39:06.566 12:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:08.470 
12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:08.470 12:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:08.470 12:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:08.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:08.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:08.470 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:08.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.471 12:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:08.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:08.471 
12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:08.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:08.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:39:08.471 00:39:08.471 --- 10.0.0.2 ping statistics --- 00:39:08.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.471 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:08.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:08.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:39:08.471 00:39:08.471 --- 10.0.0.1 ping statistics --- 00:39:08.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.471 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:08.471 12:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=837852 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 837852 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 837852 ']' 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:08.471 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:08.731 [2024-11-05 12:53:37.757774] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:08.731 [2024-11-05 12:53:37.758977] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:39:08.731 [2024-11-05 12:53:37.759056] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.731 [2024-11-05 12:53:37.832760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.731 [2024-11-05 12:53:37.880009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.731 [2024-11-05 12:53:37.880077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.731 [2024-11-05 12:53:37.880105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.731 [2024-11-05 12:53:37.880125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.731 [2024-11-05 12:53:37.880134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:08.731 [2024-11-05 12:53:37.880728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.994 [2024-11-05 12:53:37.973331] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:08.994 [2024-11-05 12:53:37.973650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
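The network plumbing traced in nvmf/common.sh@250–@291 above (flush, namespace creation, address assignment, firewall open, bidirectional ping) can be condensed into a short standalone script. This is a sketch reconstructed from the trace, not the harness itself; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from the log, and it assumes root plus the same e810 netdevs being present:

```shell
#!/usr/bin/env bash
# Sketch (assumption): the two-interface NVMe/TCP topology from the trace.
# The target-side netdev is moved into a private network namespace so the
# target and initiator get independent IP stacks on one host.
set -euo pipefail

TARGET_IF=cvl_0_0        # ends up inside the namespace, serves 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, uses 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port and verify reachability both ways,
# as common.sh@287-@291 does before starting nvmf_tgt.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Any process launched with `ip netns exec cvl_0_0_ns_spdk …` (as `NVMF_TARGET_NS_CMD` does for nvmf_tgt later in the trace) then sees only the target-side interface.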
00:39:08.994 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:08.994 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:39:08.994 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:08.994 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:08.994 12:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:08.994 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.994 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:09.252 [2024-11-05 12:53:38.285301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:09.252 ************************************ 00:39:09.252 START TEST lvs_grow_clean 00:39:09.252 ************************************ 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:39:09.252 12:53:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:09.252 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:09.511 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:09.511 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:09.771 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:09.771 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:09.771 12:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:10.029 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:10.029 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:10.029 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 lvol 150 00:39:10.289 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4bc04372-6bce-47d6-bdfd-0d3e9535ad86 00:39:10.289 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:10.289 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:10.614 [2024-11-05 12:53:39.725210] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:10.614 [2024-11-05 12:53:39.725295] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:10.614 true 00:39:10.614 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:10.614 12:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:10.889 12:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:10.889 12:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:11.147 12:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4bc04372-6bce-47d6-bdfd-0d3e9535ad86 00:39:11.405 12:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:11.662 [2024-11-05 12:53:40.885523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.663 12:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=838292 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 838292 /var/tmp/bdevperf.sock 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 838292 ']' 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:12.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
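The cluster counts the test asserts on are plain arithmetic. The 200 MiB aio bdev carved into 4 MiB clusters yields 50 raw clusters, and the trace reports 49 usable data clusters, consistent with one cluster's worth of lvstore metadata overhead (an inference from the reported numbers, not an SPDK guarantee); after the file is truncated to 400 MiB and rescanned, the same reasoning gives 99. A sketch:

```shell
# Sketch (assumption): reproduce the data-cluster counts from the trace,
# treating lvstore metadata as costing one cluster at these sizes.
cluster_mb=4

aio_mb=200
total=$(( aio_mb / cluster_mb ))              # 50 raw clusters
echo "data clusters (initial): $(( total - 1 ))"   # 49, as asserted at sh@30

aio_mb=400
total=$(( aio_mb / cluster_mb ))              # 100 raw clusters
echo "data clusters (grown):   $(( total - 1 ))"   # 99, as asserted at sh@62
```

The 150 MiB lvol created on top fits the initial 49-cluster store with room to spare, which is why the grow test can run while the volume stays attached.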
00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:12.228 [2024-11-05 12:53:41.218497] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:39:12.228 [2024-11-05 12:53:41.218576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838292 ] 00:39:12.228 [2024-11-05 12:53:41.285511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.228 [2024-11-05 12:53:41.332716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:39:12.228 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:12.797 Nvme0n1 00:39:12.797 12:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:13.057 [ 00:39:13.057 { 00:39:13.057 "name": "Nvme0n1", 00:39:13.057 "aliases": [ 00:39:13.057 "4bc04372-6bce-47d6-bdfd-0d3e9535ad86" 00:39:13.057 ], 00:39:13.057 "product_name": "NVMe disk", 00:39:13.057 
"block_size": 4096, 00:39:13.057 "num_blocks": 38912, 00:39:13.057 "uuid": "4bc04372-6bce-47d6-bdfd-0d3e9535ad86", 00:39:13.057 "numa_id": 0, 00:39:13.057 "assigned_rate_limits": { 00:39:13.057 "rw_ios_per_sec": 0, 00:39:13.057 "rw_mbytes_per_sec": 0, 00:39:13.057 "r_mbytes_per_sec": 0, 00:39:13.057 "w_mbytes_per_sec": 0 00:39:13.057 }, 00:39:13.057 "claimed": false, 00:39:13.057 "zoned": false, 00:39:13.057 "supported_io_types": { 00:39:13.057 "read": true, 00:39:13.057 "write": true, 00:39:13.057 "unmap": true, 00:39:13.057 "flush": true, 00:39:13.057 "reset": true, 00:39:13.057 "nvme_admin": true, 00:39:13.057 "nvme_io": true, 00:39:13.057 "nvme_io_md": false, 00:39:13.057 "write_zeroes": true, 00:39:13.057 "zcopy": false, 00:39:13.057 "get_zone_info": false, 00:39:13.057 "zone_management": false, 00:39:13.057 "zone_append": false, 00:39:13.057 "compare": true, 00:39:13.057 "compare_and_write": true, 00:39:13.057 "abort": true, 00:39:13.057 "seek_hole": false, 00:39:13.057 "seek_data": false, 00:39:13.057 "copy": true, 00:39:13.057 "nvme_iov_md": false 00:39:13.057 }, 00:39:13.057 "memory_domains": [ 00:39:13.057 { 00:39:13.057 "dma_device_id": "system", 00:39:13.057 "dma_device_type": 1 00:39:13.057 } 00:39:13.057 ], 00:39:13.057 "driver_specific": { 00:39:13.057 "nvme": [ 00:39:13.057 { 00:39:13.057 "trid": { 00:39:13.057 "trtype": "TCP", 00:39:13.057 "adrfam": "IPv4", 00:39:13.057 "traddr": "10.0.0.2", 00:39:13.057 "trsvcid": "4420", 00:39:13.057 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:13.057 }, 00:39:13.057 "ctrlr_data": { 00:39:13.057 "cntlid": 1, 00:39:13.057 "vendor_id": "0x8086", 00:39:13.057 "model_number": "SPDK bdev Controller", 00:39:13.057 "serial_number": "SPDK0", 00:39:13.057 "firmware_revision": "25.01", 00:39:13.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:13.057 "oacs": { 00:39:13.057 "security": 0, 00:39:13.057 "format": 0, 00:39:13.057 "firmware": 0, 00:39:13.057 "ns_manage": 0 00:39:13.057 }, 00:39:13.057 "multi_ctrlr": true, 
00:39:13.057 "ana_reporting": false 00:39:13.057 }, 00:39:13.057 "vs": { 00:39:13.057 "nvme_version": "1.3" 00:39:13.057 }, 00:39:13.057 "ns_data": { 00:39:13.057 "id": 1, 00:39:13.057 "can_share": true 00:39:13.057 } 00:39:13.057 } 00:39:13.057 ], 00:39:13.057 "mp_policy": "active_passive" 00:39:13.057 } 00:39:13.057 } 00:39:13.057 ] 00:39:13.057 12:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=838403 00:39:13.057 12:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:13.057 12:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:13.057 Running I/O for 10 seconds... 00:39:13.996 Latency(us) 00:39:13.996 [2024-11-05T11:53:43.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:13.996 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:39:13.996 [2024-11-05T11:53:43.234Z] =================================================================================================================== 00:39:13.996 [2024-11-05T11:53:43.234Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:39:13.996 00:39:14.933 12:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:15.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:15.191 Nvme0n1 : 2.00 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:39:15.191 [2024-11-05T11:53:44.429Z] 
=================================================================================================================== 00:39:15.191 [2024-11-05T11:53:44.429Z] Total : 15049.50 58.79 0.00 0.00 0.00 0.00 0.00 00:39:15.191 00:39:15.191 true 00:39:15.191 12:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:15.191 12:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:15.451 12:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:15.451 12:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:15.451 12:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 838403 00:39:16.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:16.017 Nvme0n1 : 3.00 15161.00 59.22 0.00 0.00 0.00 0.00 0.00 00:39:16.017 [2024-11-05T11:53:45.255Z] =================================================================================================================== 00:39:16.017 [2024-11-05T11:53:45.255Z] Total : 15161.00 59.22 0.00 0.00 0.00 0.00 0.00 00:39:16.017 00:39:17.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:17.392 Nvme0n1 : 4.00 15276.00 59.67 0.00 0.00 0.00 0.00 0.00 00:39:17.392 [2024-11-05T11:53:46.630Z] =================================================================================================================== 00:39:17.392 [2024-11-05T11:53:46.630Z] Total : 15276.00 59.67 0.00 0.00 0.00 0.00 0.00 00:39:17.392 00:39:17.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:39:17.961 Nvme0n1 : 5.00 15351.80 59.97 0.00 0.00 0.00 0.00 0.00 00:39:17.961 [2024-11-05T11:53:47.199Z] =================================================================================================================== 00:39:17.961 [2024-11-05T11:53:47.199Z] Total : 15351.80 59.97 0.00 0.00 0.00 0.00 0.00 00:39:17.961 00:39:19.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:19.338 Nvme0n1 : 6.00 15402.33 60.17 0.00 0.00 0.00 0.00 0.00 00:39:19.338 [2024-11-05T11:53:48.576Z] =================================================================================================================== 00:39:19.338 [2024-11-05T11:53:48.576Z] Total : 15402.33 60.17 0.00 0.00 0.00 0.00 0.00 00:39:19.338 00:39:20.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:20.274 Nvme0n1 : 7.00 15460.86 60.39 0.00 0.00 0.00 0.00 0.00 00:39:20.274 [2024-11-05T11:53:49.512Z] =================================================================================================================== 00:39:20.274 [2024-11-05T11:53:49.512Z] Total : 15460.86 60.39 0.00 0.00 0.00 0.00 0.00 00:39:20.274 00:39:21.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:21.208 Nvme0n1 : 8.00 15493.00 60.52 0.00 0.00 0.00 0.00 0.00 00:39:21.208 [2024-11-05T11:53:50.446Z] =================================================================================================================== 00:39:21.208 [2024-11-05T11:53:50.446Z] Total : 15493.00 60.52 0.00 0.00 0.00 0.00 0.00 00:39:21.208 00:39:22.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:22.145 Nvme0n1 : 9.00 15507.22 60.58 0.00 0.00 0.00 0.00 0.00 00:39:22.145 [2024-11-05T11:53:51.383Z] =================================================================================================================== 00:39:22.145 [2024-11-05T11:53:51.383Z] Total : 15507.22 60.58 0.00 0.00 0.00 0.00 0.00 00:39:22.145 
00:39:23.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:23.080 Nvme0n1 : 10.00 15544.00 60.72 0.00 0.00 0.00 0.00 0.00 00:39:23.080 [2024-11-05T11:53:52.318Z] =================================================================================================================== 00:39:23.080 [2024-11-05T11:53:52.318Z] Total : 15544.00 60.72 0.00 0.00 0.00 0.00 0.00 00:39:23.080 00:39:23.080 00:39:23.080 Latency(us) 00:39:23.080 [2024-11-05T11:53:52.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:23.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:23.080 Nvme0n1 : 10.01 15544.64 60.72 0.00 0.00 8229.79 4126.34 18058.81 00:39:23.080 [2024-11-05T11:53:52.318Z] =================================================================================================================== 00:39:23.080 [2024-11-05T11:53:52.318Z] Total : 15544.64 60.72 0.00 0.00 8229.79 4126.34 18058.81 00:39:23.080 { 00:39:23.080 "results": [ 00:39:23.080 { 00:39:23.080 "job": "Nvme0n1", 00:39:23.080 "core_mask": "0x2", 00:39:23.080 "workload": "randwrite", 00:39:23.080 "status": "finished", 00:39:23.080 "queue_depth": 128, 00:39:23.080 "io_size": 4096, 00:39:23.080 "runtime": 10.007821, 00:39:23.080 "iops": 15544.642535073319, 00:39:23.080 "mibps": 60.72125990263015, 00:39:23.080 "io_failed": 0, 00:39:23.080 "io_timeout": 0, 00:39:23.080 "avg_latency_us": 8229.792446013842, 00:39:23.080 "min_latency_us": 4126.34074074074, 00:39:23.080 "max_latency_us": 18058.80888888889 00:39:23.080 } 00:39:23.080 ], 00:39:23.080 "core_count": 1 00:39:23.080 } 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 838292 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 838292 ']' 00:39:23.080 12:53:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 838292 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 838292 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 838292' 00:39:23.080 killing process with pid 838292 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 838292 00:39:23.080 Received shutdown signal, test time was about 10.000000 seconds 00:39:23.080 00:39:23.080 Latency(us) 00:39:23.080 [2024-11-05T11:53:52.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:23.080 [2024-11-05T11:53:52.318Z] =================================================================================================================== 00:39:23.080 [2024-11-05T11:53:52.318Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:23.080 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 838292 00:39:23.338 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:23.597 12:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:23.856 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:23.856 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:24.115 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:24.115 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:24.115 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:24.374 [2024-11-05 12:53:53.537261] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:24.374 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:24.634 request: 00:39:24.634 { 00:39:24.634 "uuid": "13e1fc81-7ec0-453b-8d48-38b1af9b7ce4", 00:39:24.634 "method": 
"bdev_lvol_get_lvstores", 00:39:24.634 "req_id": 1 00:39:24.634 } 00:39:24.634 Got JSON-RPC error response 00:39:24.634 response: 00:39:24.634 { 00:39:24.634 "code": -19, 00:39:24.634 "message": "No such device" 00:39:24.634 } 00:39:24.634 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:39:24.634 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:24.634 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:24.634 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:24.634 12:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:24.892 aio_bdev 00:39:25.152 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4bc04372-6bce-47d6-bdfd-0d3e9535ad86 00:39:25.152 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=4bc04372-6bce-47d6-bdfd-0d3e9535ad86 00:39:25.152 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:39:25.152 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:39:25.152 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:39:25.152 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:39:25.152 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:25.411 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4bc04372-6bce-47d6-bdfd-0d3e9535ad86 -t 2000 00:39:25.671 [ 00:39:25.671 { 00:39:25.671 "name": "4bc04372-6bce-47d6-bdfd-0d3e9535ad86", 00:39:25.671 "aliases": [ 00:39:25.671 "lvs/lvol" 00:39:25.671 ], 00:39:25.671 "product_name": "Logical Volume", 00:39:25.671 "block_size": 4096, 00:39:25.671 "num_blocks": 38912, 00:39:25.671 "uuid": "4bc04372-6bce-47d6-bdfd-0d3e9535ad86", 00:39:25.671 "assigned_rate_limits": { 00:39:25.671 "rw_ios_per_sec": 0, 00:39:25.671 "rw_mbytes_per_sec": 0, 00:39:25.671 "r_mbytes_per_sec": 0, 00:39:25.671 "w_mbytes_per_sec": 0 00:39:25.671 }, 00:39:25.671 "claimed": false, 00:39:25.671 "zoned": false, 00:39:25.671 "supported_io_types": { 00:39:25.671 "read": true, 00:39:25.671 "write": true, 00:39:25.671 "unmap": true, 00:39:25.671 "flush": false, 00:39:25.671 "reset": true, 00:39:25.671 "nvme_admin": false, 00:39:25.671 "nvme_io": false, 00:39:25.671 "nvme_io_md": false, 00:39:25.671 "write_zeroes": true, 00:39:25.671 "zcopy": false, 00:39:25.671 "get_zone_info": false, 00:39:25.671 "zone_management": false, 00:39:25.671 "zone_append": false, 00:39:25.671 "compare": false, 00:39:25.671 "compare_and_write": false, 00:39:25.671 "abort": false, 00:39:25.671 "seek_hole": true, 00:39:25.671 "seek_data": true, 00:39:25.671 "copy": false, 00:39:25.671 "nvme_iov_md": false 00:39:25.671 }, 00:39:25.671 "driver_specific": { 00:39:25.671 "lvol": { 00:39:25.671 "lvol_store_uuid": "13e1fc81-7ec0-453b-8d48-38b1af9b7ce4", 00:39:25.671 "base_bdev": "aio_bdev", 00:39:25.671 
"thin_provision": false, 00:39:25.671 "num_allocated_clusters": 38, 00:39:25.671 "snapshot": false, 00:39:25.671 "clone": false, 00:39:25.671 "esnap_clone": false 00:39:25.671 } 00:39:25.671 } 00:39:25.671 } 00:39:25.671 ] 00:39:25.671 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:39:25.671 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:25.671 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:25.931 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:25.931 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 00:39:25.931 12:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:26.191 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:26.191 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4bc04372-6bce-47d6-bdfd-0d3e9535ad86 00:39:26.452 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13e1fc81-7ec0-453b-8d48-38b1af9b7ce4 
00:39:26.712 12:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:26.971 00:39:26.971 real 0m17.777s 00:39:26.971 user 0m17.361s 00:39:26.971 sys 0m1.781s 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:26.971 ************************************ 00:39:26.971 END TEST lvs_grow_clean 00:39:26.971 ************************************ 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:26.971 ************************************ 00:39:26.971 START TEST lvs_grow_dirty 00:39:26.971 ************************************ 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:26.971 12:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:26.971 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:26.972 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:26.972 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:26.972 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:26.972 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:26.972 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:27.229 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:27.229 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:27.796 12:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:27.796 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:27.796 12:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:27.796 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:27.796 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:27.796 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 lvol 150 00:39:28.055 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=05b449d1-7073-451e-a094-c3fe1a93efe0 00:39:28.055 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:28.055 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:28.315 [2024-11-05 12:53:57.541220] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:28.315 [2024-11-05 
12:53:57.541339] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:28.315 true 00:39:28.575 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:28.575 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:28.833 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:28.833 12:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:29.093 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 05b449d1-7073-451e-a094-c3fe1a93efe0 00:39:29.353 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:29.613 [2024-11-05 12:53:58.637544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:29.613 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=840333 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 840333 /var/tmp/bdevperf.sock 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 840333 ']' 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:29.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:29.872 12:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:29.872 [2024-11-05 12:53:58.980639] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:39:29.872 [2024-11-05 12:53:58.980735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840333 ] 00:39:29.872 [2024-11-05 12:53:59.046191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.872 [2024-11-05 12:53:59.092268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:30.130 12:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:30.130 12:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:39:30.130 12:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:30.698 Nvme0n1 00:39:30.698 12:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:30.957 [ 00:39:30.957 { 00:39:30.957 "name": "Nvme0n1", 00:39:30.957 "aliases": [ 00:39:30.957 "05b449d1-7073-451e-a094-c3fe1a93efe0" 00:39:30.957 ], 00:39:30.957 "product_name": "NVMe disk", 00:39:30.957 "block_size": 4096, 00:39:30.957 "num_blocks": 38912, 00:39:30.957 "uuid": "05b449d1-7073-451e-a094-c3fe1a93efe0", 00:39:30.957 "numa_id": 0, 00:39:30.957 "assigned_rate_limits": { 00:39:30.957 "rw_ios_per_sec": 0, 00:39:30.957 "rw_mbytes_per_sec": 0, 00:39:30.957 "r_mbytes_per_sec": 0, 00:39:30.957 "w_mbytes_per_sec": 0 00:39:30.957 }, 00:39:30.957 "claimed": false, 00:39:30.957 "zoned": false, 
00:39:30.957 "supported_io_types": { 00:39:30.957 "read": true, 00:39:30.957 "write": true, 00:39:30.957 "unmap": true, 00:39:30.957 "flush": true, 00:39:30.957 "reset": true, 00:39:30.957 "nvme_admin": true, 00:39:30.957 "nvme_io": true, 00:39:30.957 "nvme_io_md": false, 00:39:30.957 "write_zeroes": true, 00:39:30.957 "zcopy": false, 00:39:30.957 "get_zone_info": false, 00:39:30.957 "zone_management": false, 00:39:30.957 "zone_append": false, 00:39:30.957 "compare": true, 00:39:30.957 "compare_and_write": true, 00:39:30.957 "abort": true, 00:39:30.957 "seek_hole": false, 00:39:30.957 "seek_data": false, 00:39:30.957 "copy": true, 00:39:30.957 "nvme_iov_md": false 00:39:30.957 }, 00:39:30.957 "memory_domains": [ 00:39:30.957 { 00:39:30.957 "dma_device_id": "system", 00:39:30.957 "dma_device_type": 1 00:39:30.957 } 00:39:30.957 ], 00:39:30.957 "driver_specific": { 00:39:30.957 "nvme": [ 00:39:30.957 { 00:39:30.957 "trid": { 00:39:30.957 "trtype": "TCP", 00:39:30.957 "adrfam": "IPv4", 00:39:30.957 "traddr": "10.0.0.2", 00:39:30.957 "trsvcid": "4420", 00:39:30.957 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:30.957 }, 00:39:30.957 "ctrlr_data": { 00:39:30.957 "cntlid": 1, 00:39:30.957 "vendor_id": "0x8086", 00:39:30.957 "model_number": "SPDK bdev Controller", 00:39:30.957 "serial_number": "SPDK0", 00:39:30.957 "firmware_revision": "25.01", 00:39:30.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:30.957 "oacs": { 00:39:30.957 "security": 0, 00:39:30.957 "format": 0, 00:39:30.957 "firmware": 0, 00:39:30.957 "ns_manage": 0 00:39:30.957 }, 00:39:30.957 "multi_ctrlr": true, 00:39:30.957 "ana_reporting": false 00:39:30.957 }, 00:39:30.957 "vs": { 00:39:30.957 "nvme_version": "1.3" 00:39:30.957 }, 00:39:30.957 "ns_data": { 00:39:30.957 "id": 1, 00:39:30.957 "can_share": true 00:39:30.957 } 00:39:30.957 } 00:39:30.957 ], 00:39:30.957 "mp_policy": "active_passive" 00:39:30.957 } 00:39:30.957 } 00:39:30.957 ] 00:39:30.957 12:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=840467 00:39:30.957 12:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:30.957 12:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:30.957 Running I/O for 10 seconds... 00:39:31.891 Latency(us) 00:39:31.891 [2024-11-05T11:54:01.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:31.891 Nvme0n1 : 1.00 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:39:31.891 [2024-11-05T11:54:01.129Z] =================================================================================================================== 00:39:31.891 [2024-11-05T11:54:01.129Z] Total : 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:39:31.891 00:39:32.825 12:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:33.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:33.083 Nvme0n1 : 2.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:39:33.083 [2024-11-05T11:54:02.321Z] =================================================================================================================== 00:39:33.083 [2024-11-05T11:54:02.321Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:39:33.083 00:39:33.083 true 00:39:33.083 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:33.083 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:33.340 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:33.340 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:33.341 12:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 840467 00:39:33.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:33.909 Nvme0n1 : 3.00 15028.33 58.70 0.00 0.00 0.00 0.00 0.00 00:39:33.909 [2024-11-05T11:54:03.147Z] =================================================================================================================== 00:39:33.909 [2024-11-05T11:54:03.147Z] Total : 15028.33 58.70 0.00 0.00 0.00 0.00 0.00 00:39:33.909 00:39:34.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:34.845 Nvme0n1 : 4.00 15129.00 59.10 0.00 0.00 0.00 0.00 0.00 00:39:34.845 [2024-11-05T11:54:04.083Z] =================================================================================================================== 00:39:34.845 [2024-11-05T11:54:04.083Z] Total : 15129.00 59.10 0.00 0.00 0.00 0.00 0.00 00:39:34.845 00:39:36.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:36.223 Nvme0n1 : 5.00 15202.00 59.38 0.00 0.00 0.00 0.00 0.00 00:39:36.223 [2024-11-05T11:54:05.461Z] =================================================================================================================== 00:39:36.223 [2024-11-05T11:54:05.461Z] Total : 15202.00 59.38 0.00 0.00 0.00 0.00 0.00 00:39:36.223 00:39:37.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:39:37.160 Nvme0n1 : 6.00 15229.50 59.49 0.00 0.00 0.00 0.00 0.00 00:39:37.160 [2024-11-05T11:54:06.398Z] =================================================================================================================== 00:39:37.160 [2024-11-05T11:54:06.398Z] Total : 15229.50 59.49 0.00 0.00 0.00 0.00 0.00 00:39:37.160 00:39:38.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:38.095 Nvme0n1 : 7.00 15281.14 59.69 0.00 0.00 0.00 0.00 0.00 00:39:38.095 [2024-11-05T11:54:07.333Z] =================================================================================================================== 00:39:38.095 [2024-11-05T11:54:07.333Z] Total : 15281.14 59.69 0.00 0.00 0.00 0.00 0.00 00:39:38.095 00:39:39.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:39.030 Nvme0n1 : 8.00 15323.62 59.86 0.00 0.00 0.00 0.00 0.00 00:39:39.030 [2024-11-05T11:54:08.268Z] =================================================================================================================== 00:39:39.030 [2024-11-05T11:54:08.268Z] Total : 15323.62 59.86 0.00 0.00 0.00 0.00 0.00 00:39:39.030 00:39:39.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:39.963 Nvme0n1 : 9.00 15342.56 59.93 0.00 0.00 0.00 0.00 0.00 00:39:39.963 [2024-11-05T11:54:09.201Z] =================================================================================================================== 00:39:39.963 [2024-11-05T11:54:09.201Z] Total : 15342.56 59.93 0.00 0.00 0.00 0.00 0.00 00:39:39.963 00:39:40.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:40.914 Nvme0n1 : 10.00 15370.40 60.04 0.00 0.00 0.00 0.00 0.00 00:39:40.914 [2024-11-05T11:54:10.152Z] =================================================================================================================== 00:39:40.914 [2024-11-05T11:54:10.152Z] Total : 15370.40 60.04 0.00 0.00 0.00 0.00 0.00 00:39:40.914 00:39:40.914 
00:39:40.914 Latency(us) 00:39:40.914 [2024-11-05T11:54:10.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:40.914 Nvme0n1 : 10.00 15363.54 60.01 0.00 0.00 8325.36 4247.70 19223.89 00:39:40.914 [2024-11-05T11:54:10.152Z] =================================================================================================================== 00:39:40.914 [2024-11-05T11:54:10.152Z] Total : 15363.54 60.01 0.00 0.00 8325.36 4247.70 19223.89 00:39:40.914 { 00:39:40.914 "results": [ 00:39:40.914 { 00:39:40.914 "job": "Nvme0n1", 00:39:40.914 "core_mask": "0x2", 00:39:40.914 "workload": "randwrite", 00:39:40.914 "status": "finished", 00:39:40.914 "queue_depth": 128, 00:39:40.914 "io_size": 4096, 00:39:40.914 "runtime": 10.004529, 00:39:40.914 "iops": 15363.541851895277, 00:39:40.914 "mibps": 60.01383535896593, 00:39:40.914 "io_failed": 0, 00:39:40.914 "io_timeout": 0, 00:39:40.914 "avg_latency_us": 8325.357213151216, 00:39:40.914 "min_latency_us": 4247.7037037037035, 00:39:40.914 "max_latency_us": 19223.893333333333 00:39:40.914 } 00:39:40.914 ], 00:39:40.914 "core_count": 1 00:39:40.914 } 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 840333 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 840333 ']' 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 840333 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:40.914 12:54:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 840333 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 840333' 00:39:40.914 killing process with pid 840333 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 840333 00:39:40.914 Received shutdown signal, test time was about 10.000000 seconds 00:39:40.914 00:39:40.914 Latency(us) 00:39:40.914 [2024-11-05T11:54:10.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.914 [2024-11-05T11:54:10.152Z] =================================================================================================================== 00:39:40.914 [2024-11-05T11:54:10.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:40.914 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 840333 00:39:41.172 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:41.432 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:41.726 12:54:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:41.726 12:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 837852 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 837852 00:39:42.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 837852 Killed "${NVMF_APP[@]}" "$@" 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=842398 00:39:42.011 12:54:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 842398 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 842398 ']' 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:42.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:42.011 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:42.011 [2024-11-05 12:54:11.241070] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:42.012 [2024-11-05 12:54:11.242234] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:39:42.012 [2024-11-05 12:54:11.242312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:42.272 [2024-11-05 12:54:11.318289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:42.272 [2024-11-05 12:54:11.365137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:42.272 [2024-11-05 12:54:11.365209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:42.272 [2024-11-05 12:54:11.365238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:42.272 [2024-11-05 12:54:11.365251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:42.272 [2024-11-05 12:54:11.365260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:42.272 [2024-11-05 12:54:11.365867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.272 [2024-11-05 12:54:11.458224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:42.272 [2024-11-05 12:54:11.458550] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:42.272 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:42.272 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:39:42.272 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:42.272 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:42.272 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:42.272 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:42.272 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:42.532 [2024-11-05 12:54:11.760685] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:42.532 [2024-11-05 12:54:11.760847] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:42.532 [2024-11-05 12:54:11.760923] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 05b449d1-7073-451e-a094-c3fe1a93efe0 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=05b449d1-7073-451e-a094-c3fe1a93efe0 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:39:42.793 12:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:43.052 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 05b449d1-7073-451e-a094-c3fe1a93efe0 -t 2000 00:39:43.312 [ 00:39:43.312 { 00:39:43.312 "name": "05b449d1-7073-451e-a094-c3fe1a93efe0", 00:39:43.312 "aliases": [ 00:39:43.312 "lvs/lvol" 00:39:43.312 ], 00:39:43.312 "product_name": "Logical Volume", 00:39:43.312 "block_size": 4096, 00:39:43.312 "num_blocks": 38912, 00:39:43.312 "uuid": "05b449d1-7073-451e-a094-c3fe1a93efe0", 00:39:43.312 "assigned_rate_limits": { 00:39:43.312 "rw_ios_per_sec": 0, 00:39:43.312 "rw_mbytes_per_sec": 0, 00:39:43.312 "r_mbytes_per_sec": 0, 00:39:43.312 "w_mbytes_per_sec": 0 00:39:43.312 }, 00:39:43.312 "claimed": false, 00:39:43.312 "zoned": false, 00:39:43.312 "supported_io_types": { 00:39:43.312 "read": true, 00:39:43.312 "write": true, 00:39:43.312 "unmap": true, 00:39:43.312 "flush": false, 00:39:43.312 "reset": true, 00:39:43.312 "nvme_admin": false, 00:39:43.312 "nvme_io": false, 00:39:43.312 "nvme_io_md": false, 00:39:43.312 "write_zeroes": true, 
00:39:43.312 "zcopy": false, 00:39:43.312 "get_zone_info": false, 00:39:43.312 "zone_management": false, 00:39:43.312 "zone_append": false, 00:39:43.312 "compare": false, 00:39:43.312 "compare_and_write": false, 00:39:43.312 "abort": false, 00:39:43.312 "seek_hole": true, 00:39:43.312 "seek_data": true, 00:39:43.312 "copy": false, 00:39:43.312 "nvme_iov_md": false 00:39:43.312 }, 00:39:43.312 "driver_specific": { 00:39:43.312 "lvol": { 00:39:43.312 "lvol_store_uuid": "e3ef5efd-0573-4cf3-b685-781b5b595dc8", 00:39:43.312 "base_bdev": "aio_bdev", 00:39:43.312 "thin_provision": false, 00:39:43.312 "num_allocated_clusters": 38, 00:39:43.312 "snapshot": false, 00:39:43.312 "clone": false, 00:39:43.312 "esnap_clone": false 00:39:43.312 } 00:39:43.312 } 00:39:43.312 } 00:39:43.312 ] 00:39:43.312 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:39:43.312 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:43.312 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:43.571 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:43.571 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:43.571 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:43.830 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:43.830 12:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:44.089 [2024-11-05 12:54:13.158449] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:44.089 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:44.349 request: 00:39:44.349 { 00:39:44.349 "uuid": "e3ef5efd-0573-4cf3-b685-781b5b595dc8", 00:39:44.349 "method": "bdev_lvol_get_lvstores", 00:39:44.349 "req_id": 1 00:39:44.349 } 00:39:44.349 Got JSON-RPC error response 00:39:44.349 response: 00:39:44.349 { 00:39:44.349 "code": -19, 00:39:44.349 "message": "No such device" 00:39:44.349 } 00:39:44.349 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:39:44.349 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:44.349 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:44.349 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:44.349 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:44.609 aio_bdev 00:39:44.609 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 05b449d1-7073-451e-a094-c3fe1a93efe0 00:39:44.609 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=05b449d1-7073-451e-a094-c3fe1a93efe0 00:39:44.609 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:39:44.609 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:39:44.609 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:39:44.609 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:39:44.609 12:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:44.868 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 05b449d1-7073-451e-a094-c3fe1a93efe0 -t 2000 00:39:45.126 [ 00:39:45.126 { 00:39:45.126 "name": "05b449d1-7073-451e-a094-c3fe1a93efe0", 00:39:45.126 "aliases": [ 00:39:45.126 "lvs/lvol" 00:39:45.126 ], 00:39:45.126 "product_name": "Logical Volume", 00:39:45.126 "block_size": 4096, 00:39:45.126 "num_blocks": 38912, 00:39:45.126 "uuid": "05b449d1-7073-451e-a094-c3fe1a93efe0", 00:39:45.126 "assigned_rate_limits": { 00:39:45.126 "rw_ios_per_sec": 0, 00:39:45.126 "rw_mbytes_per_sec": 0, 00:39:45.126 
"r_mbytes_per_sec": 0, 00:39:45.126 "w_mbytes_per_sec": 0 00:39:45.126 }, 00:39:45.126 "claimed": false, 00:39:45.126 "zoned": false, 00:39:45.126 "supported_io_types": { 00:39:45.126 "read": true, 00:39:45.126 "write": true, 00:39:45.126 "unmap": true, 00:39:45.126 "flush": false, 00:39:45.126 "reset": true, 00:39:45.126 "nvme_admin": false, 00:39:45.126 "nvme_io": false, 00:39:45.126 "nvme_io_md": false, 00:39:45.126 "write_zeroes": true, 00:39:45.126 "zcopy": false, 00:39:45.126 "get_zone_info": false, 00:39:45.126 "zone_management": false, 00:39:45.126 "zone_append": false, 00:39:45.126 "compare": false, 00:39:45.126 "compare_and_write": false, 00:39:45.126 "abort": false, 00:39:45.126 "seek_hole": true, 00:39:45.126 "seek_data": true, 00:39:45.126 "copy": false, 00:39:45.126 "nvme_iov_md": false 00:39:45.126 }, 00:39:45.126 "driver_specific": { 00:39:45.126 "lvol": { 00:39:45.126 "lvol_store_uuid": "e3ef5efd-0573-4cf3-b685-781b5b595dc8", 00:39:45.126 "base_bdev": "aio_bdev", 00:39:45.126 "thin_provision": false, 00:39:45.126 "num_allocated_clusters": 38, 00:39:45.126 "snapshot": false, 00:39:45.126 "clone": false, 00:39:45.126 "esnap_clone": false 00:39:45.126 } 00:39:45.126 } 00:39:45.126 } 00:39:45.126 ] 00:39:45.126 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:39:45.126 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:45.126 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:45.384 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:45.384 12:54:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:45.384 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:45.642 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:45.642 12:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 05b449d1-7073-451e-a094-c3fe1a93efe0 00:39:45.900 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3ef5efd-0573-4cf3-b685-781b5b595dc8 00:39:46.158 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:46.418 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:46.678 00:39:46.678 real 0m19.511s 00:39:46.678 user 0m36.645s 00:39:46.678 sys 0m4.659s 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:46.678 ************************************ 00:39:46.678 END TEST lvs_grow_dirty 00:39:46.678 ************************************ 
00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:46.678 nvmf_trace.0 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:46.678 12:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:46.678 rmmod nvme_tcp 00:39:46.678 rmmod nvme_fabrics 00:39:46.678 rmmod nvme_keyring 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 842398 ']' 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 842398 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 842398 ']' 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 842398 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 842398 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:46.678 12:54:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 842398' 00:39:46.678 killing process with pid 842398 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 842398 00:39:46.678 12:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 842398 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:46.937 12:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.468 12:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:49.468 00:39:49.468 real 0m42.873s 00:39:49.468 user 0m55.826s 00:39:49.468 sys 0m8.496s 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:49.468 ************************************ 00:39:49.468 END TEST nvmf_lvs_grow 00:39:49.468 ************************************ 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:49.468 ************************************ 00:39:49.468 START TEST nvmf_bdev_io_wait 00:39:49.468 ************************************ 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:49.468 * Looking for test storage... 
00:39:49.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:49.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.468 --rc genhtml_branch_coverage=1 00:39:49.468 --rc genhtml_function_coverage=1 00:39:49.468 --rc genhtml_legend=1 00:39:49.468 --rc geninfo_all_blocks=1 00:39:49.468 --rc geninfo_unexecuted_blocks=1 00:39:49.468 00:39:49.468 ' 00:39:49.468 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:49.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.468 --rc genhtml_branch_coverage=1 00:39:49.468 --rc genhtml_function_coverage=1 00:39:49.468 --rc genhtml_legend=1 00:39:49.469 --rc geninfo_all_blocks=1 00:39:49.469 --rc geninfo_unexecuted_blocks=1 00:39:49.469 00:39:49.469 ' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:49.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.469 --rc genhtml_branch_coverage=1 00:39:49.469 --rc genhtml_function_coverage=1 00:39:49.469 --rc genhtml_legend=1 00:39:49.469 --rc geninfo_all_blocks=1 00:39:49.469 --rc geninfo_unexecuted_blocks=1 00:39:49.469 00:39:49.469 ' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:49.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.469 --rc genhtml_branch_coverage=1 00:39:49.469 --rc genhtml_function_coverage=1 
00:39:49.469 --rc genhtml_legend=1 00:39:49.469 --rc geninfo_all_blocks=1 00:39:49.469 --rc geninfo_unexecuted_blocks=1 00:39:49.469 00:39:49.469 ' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:49.469 12:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.469 12:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:49.469 12:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:49.469 12:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:49.469 12:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:51.371 12:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:51.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:51.371 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:39:51.371 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:51.372 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:51.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:51.372 12:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:51.372 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:51.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:51.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:39:51.631 00:39:51.631 --- 10.0.0.2 ping statistics --- 00:39:51.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.631 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:51.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:51.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:39:51.631 00:39:51.631 --- 10.0.0.1 ping statistics --- 00:39:51.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:51.631 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:51.631 12:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=844918 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 844918 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 844918 ']' 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:51.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:51.631 12:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:51.631 [2024-11-05 12:54:20.776231] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:51.631 [2024-11-05 12:54:20.777443] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:39:51.631 [2024-11-05 12:54:20.777522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:51.631 [2024-11-05 12:54:20.854463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:51.890 [2024-11-05 12:54:20.905463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:51.890 [2024-11-05 12:54:20.905544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:51.890 [2024-11-05 12:54:20.905558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:51.890 [2024-11-05 12:54:20.905569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:51.890 [2024-11-05 12:54:20.905594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:51.890 [2024-11-05 12:54:20.907301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.890 [2024-11-05 12:54:20.907363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:51.890 [2024-11-05 12:54:20.907428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:51.890 [2024-11-05 12:54:20.907433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.890 [2024-11-05 12:54:20.907946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.890 12:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:51.890 [2024-11-05 12:54:21.106994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:51.890 [2024-11-05 12:54:21.107185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:51.890 [2024-11-05 12:54:21.108034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:51.890 [2024-11-05 12:54:21.108790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:51.890 [2024-11-05 12:54:21.116120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.890 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:52.149 Malloc0 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.149 12:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:52.149 [2024-11-05 12:54:21.172363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=845066 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=845068 00:39:52.149 12:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:52.149 { 00:39:52.149 "params": { 00:39:52.149 "name": "Nvme$subsystem", 00:39:52.149 "trtype": "$TEST_TRANSPORT", 00:39:52.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.149 "adrfam": "ipv4", 00:39:52.149 "trsvcid": "$NVMF_PORT", 00:39:52.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.149 "hdgst": ${hdgst:-false}, 00:39:52.149 "ddgst": ${ddgst:-false} 00:39:52.149 }, 00:39:52.149 "method": "bdev_nvme_attach_controller" 00:39:52.149 } 00:39:52.149 EOF 00:39:52.149 )") 00:39:52.149 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=845070 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:52.150 12:54:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:52.150 { 00:39:52.150 "params": { 00:39:52.150 "name": "Nvme$subsystem", 00:39:52.150 "trtype": "$TEST_TRANSPORT", 00:39:52.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.150 "adrfam": "ipv4", 00:39:52.150 "trsvcid": "$NVMF_PORT", 00:39:52.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.150 "hdgst": ${hdgst:-false}, 00:39:52.150 "ddgst": ${ddgst:-false} 00:39:52.150 }, 00:39:52.150 "method": "bdev_nvme_attach_controller" 00:39:52.150 } 00:39:52.150 EOF 00:39:52.150 )") 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=845073 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:52.150 { 00:39:52.150 "params": { 00:39:52.150 "name": 
"Nvme$subsystem", 00:39:52.150 "trtype": "$TEST_TRANSPORT", 00:39:52.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.150 "adrfam": "ipv4", 00:39:52.150 "trsvcid": "$NVMF_PORT", 00:39:52.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.150 "hdgst": ${hdgst:-false}, 00:39:52.150 "ddgst": ${ddgst:-false} 00:39:52.150 }, 00:39:52.150 "method": "bdev_nvme_attach_controller" 00:39:52.150 } 00:39:52.150 EOF 00:39:52.150 )") 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:52.150 { 00:39:52.150 "params": { 00:39:52.150 "name": "Nvme$subsystem", 00:39:52.150 "trtype": "$TEST_TRANSPORT", 00:39:52.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:52.150 "adrfam": "ipv4", 00:39:52.150 "trsvcid": "$NVMF_PORT", 00:39:52.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:52.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:52.150 "hdgst": ${hdgst:-false}, 00:39:52.150 "ddgst": ${ddgst:-false} 00:39:52.150 }, 00:39:52.150 "method": 
"bdev_nvme_attach_controller" 00:39:52.150 } 00:39:52.150 EOF 00:39:52.150 )") 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 845066 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:52.150 "params": { 00:39:52.150 "name": "Nvme1", 00:39:52.150 "trtype": "tcp", 00:39:52.150 "traddr": "10.0.0.2", 00:39:52.150 "adrfam": "ipv4", 00:39:52.150 "trsvcid": "4420", 00:39:52.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:52.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:52.150 "hdgst": false, 00:39:52.150 "ddgst": false 00:39:52.150 }, 00:39:52.150 "method": "bdev_nvme_attach_controller" 00:39:52.150 }' 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:52.150 "params": { 00:39:52.150 "name": "Nvme1", 00:39:52.150 "trtype": "tcp", 00:39:52.150 "traddr": "10.0.0.2", 00:39:52.150 "adrfam": "ipv4", 00:39:52.150 "trsvcid": "4420", 00:39:52.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:52.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:52.150 "hdgst": false, 00:39:52.150 "ddgst": false 00:39:52.150 }, 00:39:52.150 "method": "bdev_nvme_attach_controller" 00:39:52.150 }' 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:52.150 "params": { 00:39:52.150 "name": "Nvme1", 00:39:52.150 "trtype": "tcp", 00:39:52.150 "traddr": "10.0.0.2", 00:39:52.150 "adrfam": "ipv4", 00:39:52.150 "trsvcid": "4420", 00:39:52.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:52.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:52.150 "hdgst": false, 00:39:52.150 "ddgst": false 00:39:52.150 }, 00:39:52.150 "method": "bdev_nvme_attach_controller" 00:39:52.150 }' 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:52.150 12:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:52.150 "params": { 00:39:52.150 "name": "Nvme1", 00:39:52.150 "trtype": "tcp", 00:39:52.150 "traddr": "10.0.0.2", 00:39:52.150 "adrfam": "ipv4", 00:39:52.150 "trsvcid": "4420", 00:39:52.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:52.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:52.150 "hdgst": false, 00:39:52.150 "ddgst": false 00:39:52.150 }, 00:39:52.150 "method": "bdev_nvme_attach_controller" 00:39:52.150 }' 00:39:52.150 [2024-11-05 12:54:21.225346] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 
initialization... 00:39:52.150 [2024-11-05 12:54:21.225341] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:39:52.150 [2024-11-05 12:54:21.225344] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:39:52.150 [2024-11-05 12:54:21.225361] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:39:52.150 [2024-11-05 12:54:21.225425] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:52.150 [2024-11-05 12:54:21.225424] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:52.150 [2024-11-05 12:54:21.225424] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:52.150 [2024-11-05 12:54:21.225443] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:52.408 [2024-11-05 12:54:21.409551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.408 [2024-11-05 12:54:21.451086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:52.408 [2024-11-05 12:54:21.507950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.408 [2024-11-05 12:54:21.549455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:52.408 [2024-11-05 12:54:21.604991] app.c: 
919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.409 [2024-11-05 12:54:21.644663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:52.666 [2024-11-05 12:54:21.671698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.666 [2024-11-05 12:54:21.709130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:39:52.666 Running I/O for 1 seconds... 00:39:52.666 Running I/O for 1 seconds... 00:39:52.666 Running I/O for 1 seconds... 00:39:52.924 Running I/O for 1 seconds... 00:39:53.860 11723.00 IOPS, 45.79 MiB/s 00:39:53.860 Latency(us) 00:39:53.860 [2024-11-05T11:54:23.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.861 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:53.861 Nvme1n1 : 1.01 11765.62 45.96 0.00 0.00 10836.19 4247.70 13689.74 00:39:53.861 [2024-11-05T11:54:23.099Z] =================================================================================================================== 00:39:53.861 [2024-11-05T11:54:23.099Z] Total : 11765.62 45.96 0.00 0.00 10836.19 4247.70 13689.74 00:39:53.861 5813.00 IOPS, 22.71 MiB/s 00:39:53.861 Latency(us) 00:39:53.861 [2024-11-05T11:54:23.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.861 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:53.861 Nvme1n1 : 1.03 5793.66 22.63 0.00 0.00 21791.46 5801.15 34952.53 00:39:53.861 [2024-11-05T11:54:23.099Z] =================================================================================================================== 00:39:53.861 [2024-11-05T11:54:23.099Z] Total : 5793.66 22.63 0.00 0.00 21791.46 5801.15 34952.53 00:39:53.861 162552.00 IOPS, 634.97 MiB/s 00:39:53.861 Latency(us) 00:39:53.861 [2024-11-05T11:54:23.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.861 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 
00:39:53.861 Nvme1n1 : 1.00 162242.33 633.76 0.00 0.00 784.71 307.96 1893.26 00:39:53.861 [2024-11-05T11:54:23.099Z] =================================================================================================================== 00:39:53.861 [2024-11-05T11:54:23.099Z] Total : 162242.33 633.76 0.00 0.00 784.71 307.96 1893.26 00:39:53.861 6079.00 IOPS, 23.75 MiB/s 00:39:53.861 Latency(us) 00:39:53.861 [2024-11-05T11:54:23.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:53.861 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:53.861 Nvme1n1 : 1.01 6183.33 24.15 0.00 0.00 20638.75 4053.52 44273.21 00:39:53.861 [2024-11-05T11:54:23.099Z] =================================================================================================================== 00:39:53.861 [2024-11-05T11:54:23.099Z] Total : 6183.33 24.15 0.00 0.00 20638.75 4053.52 44273.21 00:39:53.861 12:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 845068 00:39:53.861 12:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 845070 00:39:53.861 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 845073 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT 
SIGTERM EXIT 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:54.119 rmmod nvme_tcp 00:39:54.119 rmmod nvme_fabrics 00:39:54.119 rmmod nvme_keyring 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 844918 ']' 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 844918 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 844918 ']' 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 844918 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:39:54.119 
12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 844918 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 844918' 00:39:54.119 killing process with pid 844918 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 844918 00:39:54.119 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 844918 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:54.379 12:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.287 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:56.287 00:39:56.287 real 0m7.284s 00:39:56.287 user 0m13.813s 00:39:56.287 sys 0m4.066s 00:39:56.287 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:56.287 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:56.287 ************************************ 00:39:56.287 END TEST nvmf_bdev_io_wait 00:39:56.287 ************************************ 00:39:56.287 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:56.287 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:39:56.287 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:56.287 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:56.287 ************************************ 00:39:56.287 START TEST nvmf_queue_depth 00:39:56.287 ************************************ 00:39:56.287 12:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:56.547 * Looking for test storage... 00:39:56.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # 
ver1_l=2 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 
00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:56.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.547 --rc genhtml_branch_coverage=1 00:39:56.547 --rc genhtml_function_coverage=1 00:39:56.547 --rc genhtml_legend=1 00:39:56.547 --rc geninfo_all_blocks=1 00:39:56.547 --rc geninfo_unexecuted_blocks=1 00:39:56.547 00:39:56.547 ' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:56.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.547 --rc genhtml_branch_coverage=1 00:39:56.547 --rc genhtml_function_coverage=1 00:39:56.547 --rc genhtml_legend=1 00:39:56.547 --rc geninfo_all_blocks=1 00:39:56.547 --rc geninfo_unexecuted_blocks=1 00:39:56.547 00:39:56.547 ' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:56.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.547 --rc genhtml_branch_coverage=1 00:39:56.547 --rc genhtml_function_coverage=1 00:39:56.547 --rc genhtml_legend=1 00:39:56.547 --rc geninfo_all_blocks=1 00:39:56.547 --rc geninfo_unexecuted_blocks=1 00:39:56.547 00:39:56.547 ' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:56.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.547 --rc genhtml_branch_coverage=1 00:39:56.547 --rc genhtml_function_coverage=1 00:39:56.547 --rc genhtml_legend=1 00:39:56.547 --rc geninfo_all_blocks=1 00:39:56.547 --rc geninfo_unexecuted_blocks=1 00:39:56.547 00:39:56.547 ' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:56.547 12:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:56.547 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.548 12:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:56.548 12:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:56.548 12:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:56.548 12:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:59.082 
12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:59.082 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:59.083 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.083 12:54:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:59.083 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:59.083 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:59.083 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:59.083 12:54:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:59.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:59.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:39:59.083 00:39:59.083 --- 10.0.0.2 ping statistics --- 00:39:59.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.083 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:59.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:59.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:39:59.083 00:39:59.083 --- 10.0.0.1 ping statistics --- 00:39:59.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.083 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:59.083 12:54:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:59.083 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=847241 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 847241 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 847241 ']' 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:59.084 12:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.084 [2024-11-05 12:54:28.001494] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:59.084 [2024-11-05 12:54:28.002771] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:39:59.084 [2024-11-05 12:54:28.002832] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:59.084 [2024-11-05 12:54:28.086647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.084 [2024-11-05 12:54:28.133872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:59.084 [2024-11-05 12:54:28.133944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:59.084 [2024-11-05 12:54:28.133974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:59.084 [2024-11-05 12:54:28.133986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:59.084 [2024-11-05 12:54:28.133996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:59.084 [2024-11-05 12:54:28.134576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.084 [2024-11-05 12:54:28.218542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:59.084 [2024-11-05 12:54:28.218857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.084 [2024-11-05 12:54:28.263216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.084 Malloc0 00:39:59.084 12:54:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.084 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.342 [2024-11-05 12:54:28.327389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.342 
12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=847311 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 847311 /var/tmp/bdevperf.sock 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 847311 ']' 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:59.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:59.342 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.343 [2024-11-05 12:54:28.374205] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:39:59.343 [2024-11-05 12:54:28.374282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847311 ] 00:39:59.343 [2024-11-05 12:54:28.439105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.343 [2024-11-05 12:54:28.483919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.600 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:59.600 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:39:59.600 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:59.600 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.600 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:59.600 NVMe0n1 00:39:59.600 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.600 12:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:59.859 Running I/O for 10 seconds... 
00:40:01.731 8192.00 IOPS, 32.00 MiB/s [2024-11-05T11:54:31.903Z] 8199.00 IOPS, 32.03 MiB/s [2024-11-05T11:54:33.283Z] 8498.33 IOPS, 33.20 MiB/s [2024-11-05T11:54:34.220Z] 8455.00 IOPS, 33.03 MiB/s [2024-11-05T11:54:35.159Z] 8578.00 IOPS, 33.51 MiB/s [2024-11-05T11:54:36.094Z] 8568.00 IOPS, 33.47 MiB/s [2024-11-05T11:54:37.031Z] 8629.57 IOPS, 33.71 MiB/s [2024-11-05T11:54:37.967Z] 8647.12 IOPS, 33.78 MiB/s [2024-11-05T11:54:38.903Z] 8648.44 IOPS, 33.78 MiB/s [2024-11-05T11:54:39.161Z] 8695.90 IOPS, 33.97 MiB/s 00:40:09.923 Latency(us) 00:40:09.923 [2024-11-05T11:54:39.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:09.923 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:09.923 Verification LBA range: start 0x0 length 0x4000 00:40:09.923 NVMe0n1 : 10.14 8673.64 33.88 0.00 0.00 117124.06 21456.97 88158.06 00:40:09.923 [2024-11-05T11:54:39.161Z] =================================================================================================================== 00:40:09.923 [2024-11-05T11:54:39.161Z] Total : 8673.64 33.88 0.00 0.00 117124.06 21456.97 88158.06 00:40:09.923 { 00:40:09.923 "results": [ 00:40:09.923 { 00:40:09.923 "job": "NVMe0n1", 00:40:09.923 "core_mask": "0x1", 00:40:09.923 "workload": "verify", 00:40:09.923 "status": "finished", 00:40:09.923 "verify_range": { 00:40:09.923 "start": 0, 00:40:09.923 "length": 16384 00:40:09.923 }, 00:40:09.923 "queue_depth": 1024, 00:40:09.923 "io_size": 4096, 00:40:09.923 "runtime": 10.142911, 00:40:09.923 "iops": 8673.644085016618, 00:40:09.923 "mibps": 33.881422207096165, 00:40:09.923 "io_failed": 0, 00:40:09.923 "io_timeout": 0, 00:40:09.924 "avg_latency_us": 117124.05716360355, 00:40:09.924 "min_latency_us": 21456.971851851853, 00:40:09.924 "max_latency_us": 88158.0562962963 00:40:09.924 } 00:40:09.924 ], 00:40:09.924 "core_count": 1 00:40:09.924 } 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 847311 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 847311 ']' 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 847311 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 847311 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 847311' 00:40:09.924 killing process with pid 847311 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 847311 00:40:09.924 Received shutdown signal, test time was about 10.000000 seconds 00:40:09.924 00:40:09.924 Latency(us) 00:40:09.924 [2024-11-05T11:54:39.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:09.924 [2024-11-05T11:54:39.162Z] =================================================================================================================== 00:40:09.924 [2024-11-05T11:54:39.162Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:09.924 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 847311 00:40:10.183 12:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:10.183 rmmod nvme_tcp 00:40:10.183 rmmod nvme_fabrics 00:40:10.183 rmmod nvme_keyring 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 847241 ']' 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 847241 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 847241 ']' 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 847241 00:40:10.183 12:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 847241 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 847241' 00:40:10.183 killing process with pid 847241 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 847241 00:40:10.183 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 847241 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:10.443 12:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:12.984 00:40:12.984 real 0m16.173s 00:40:12.984 user 0m22.278s 00:40:12.984 sys 0m3.406s 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:12.984 ************************************ 00:40:12.984 END TEST nvmf_queue_depth 00:40:12.984 ************************************ 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:12.984 ************************************ 00:40:12.984 START 
TEST nvmf_target_multipath 00:40:12.984 ************************************ 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:12.984 * Looking for test storage... 00:40:12.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:12.984 12:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:12.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.984 --rc genhtml_branch_coverage=1 00:40:12.984 --rc genhtml_function_coverage=1 00:40:12.984 --rc genhtml_legend=1 00:40:12.984 --rc geninfo_all_blocks=1 00:40:12.984 --rc geninfo_unexecuted_blocks=1 00:40:12.984 00:40:12.984 ' 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:12.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.984 --rc genhtml_branch_coverage=1 00:40:12.984 --rc genhtml_function_coverage=1 00:40:12.984 --rc genhtml_legend=1 00:40:12.984 --rc geninfo_all_blocks=1 00:40:12.984 --rc geninfo_unexecuted_blocks=1 00:40:12.984 00:40:12.984 ' 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:12.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.984 --rc genhtml_branch_coverage=1 00:40:12.984 --rc genhtml_function_coverage=1 00:40:12.984 --rc genhtml_legend=1 00:40:12.984 --rc geninfo_all_blocks=1 00:40:12.984 --rc geninfo_unexecuted_blocks=1 00:40:12.984 00:40:12.984 ' 00:40:12.984 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:12.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.984 --rc genhtml_branch_coverage=1 00:40:12.984 --rc genhtml_function_coverage=1 00:40:12.984 --rc genhtml_legend=1 00:40:12.984 --rc geninfo_all_blocks=1 00:40:12.984 --rc geninfo_unexecuted_blocks=1 00:40:12.984 00:40:12.985 ' 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:12.985 12:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.985 12:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:12.985 12:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:14.888 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:14.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:14.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:14.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:14.888 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:14.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:14.888 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:14.889 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:14.889 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:15.148 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:15.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:15.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:40:15.148 00:40:15.148 --- 10.0.0.2 ping statistics --- 00:40:15.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.148 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:15.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:15.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:40:15.148 00:40:15.148 --- 10.0.0.1 ping statistics --- 00:40:15.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.148 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:15.148 only one NIC for nvmf test 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:15.148 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:15.148 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:15.149 rmmod nvme_tcp 00:40:15.149 rmmod nvme_fabrics 00:40:15.149 rmmod nvme_keyring 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:15.149 12:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.149 12:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:17.117 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.118 
12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.118 00:40:17.118 real 0m4.587s 00:40:17.118 user 0m0.946s 00:40:17.118 sys 0m1.661s 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:17.118 ************************************ 00:40:17.118 END TEST nvmf_target_multipath 00:40:17.118 ************************************ 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:17.118 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:17.377 ************************************ 00:40:17.377 START TEST nvmf_zcopy 00:40:17.377 ************************************ 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:17.377 * Looking for test storage... 
00:40:17.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:17.377 12:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:17.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.377 --rc genhtml_branch_coverage=1 00:40:17.377 --rc genhtml_function_coverage=1 00:40:17.377 --rc genhtml_legend=1 00:40:17.377 --rc geninfo_all_blocks=1 00:40:17.377 --rc geninfo_unexecuted_blocks=1 00:40:17.377 00:40:17.377 ' 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:17.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.377 --rc genhtml_branch_coverage=1 00:40:17.377 --rc genhtml_function_coverage=1 00:40:17.377 --rc genhtml_legend=1 00:40:17.377 --rc geninfo_all_blocks=1 00:40:17.377 --rc geninfo_unexecuted_blocks=1 00:40:17.377 00:40:17.377 ' 00:40:17.377 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:17.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.377 --rc genhtml_branch_coverage=1 00:40:17.378 --rc genhtml_function_coverage=1 00:40:17.378 --rc genhtml_legend=1 00:40:17.378 --rc geninfo_all_blocks=1 00:40:17.378 --rc geninfo_unexecuted_blocks=1 00:40:17.378 00:40:17.378 ' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:17.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.378 --rc genhtml_branch_coverage=1 00:40:17.378 --rc genhtml_function_coverage=1 00:40:17.378 --rc genhtml_legend=1 00:40:17.378 --rc geninfo_all_blocks=1 00:40:17.378 --rc geninfo_unexecuted_blocks=1 00:40:17.378 00:40:17.378 ' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:17.378 12:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.378 12:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:17.378 12:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:19.911 
12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.911 12:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:19.911 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:19.911 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.911 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:19.912 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:19.912 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:19.912 12:54:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:19.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:19.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:40:19.912 00:40:19.912 --- 10.0.0.2 ping statistics --- 00:40:19.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.912 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:19.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:19.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:40:19.912 00:40:19.912 --- 10.0.0.1 ping statistics --- 00:40:19.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.912 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=852484 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 852484 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 852484 ']' 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:19.912 12:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:19.912 [2024-11-05 12:54:48.916810] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:19.912 [2024-11-05 12:54:48.917976] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:40:19.912 [2024-11-05 12:54:48.918057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:19.912 [2024-11-05 12:54:48.998568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.912 [2024-11-05 12:54:49.046521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.912 [2024-11-05 12:54:49.046582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:19.912 [2024-11-05 12:54:49.046611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:19.912 [2024-11-05 12:54:49.046623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:19.912 [2024-11-05 12:54:49.046634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:19.912 [2024-11-05 12:54:49.047273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.912 [2024-11-05 12:54:49.141687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:19.912 [2024-11-05 12:54:49.142030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:20.171 [2024-11-05 12:54:49.195894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:20.171 
12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:20.171 [2024-11-05 12:54:49.212108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:20.171 malloc0 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:20.171 { 00:40:20.171 "params": { 00:40:20.171 "name": "Nvme$subsystem", 00:40:20.171 "trtype": "$TEST_TRANSPORT", 00:40:20.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:20.171 "adrfam": "ipv4", 00:40:20.171 "trsvcid": "$NVMF_PORT", 00:40:20.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:20.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:20.171 "hdgst": ${hdgst:-false}, 00:40:20.171 "ddgst": ${ddgst:-false} 00:40:20.171 }, 00:40:20.171 "method": "bdev_nvme_attach_controller" 00:40:20.171 } 00:40:20.171 EOF 00:40:20.171 )") 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:20.171 12:54:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:20.171 12:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:20.171 "params": { 00:40:20.171 "name": "Nvme1", 00:40:20.171 "trtype": "tcp", 00:40:20.171 "traddr": "10.0.0.2", 00:40:20.171 "adrfam": "ipv4", 00:40:20.171 "trsvcid": "4420", 00:40:20.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:20.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:20.171 "hdgst": false, 00:40:20.171 "ddgst": false 00:40:20.171 }, 00:40:20.171 "method": "bdev_nvme_attach_controller" 00:40:20.171 }' 00:40:20.171 [2024-11-05 12:54:49.296470] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:40:20.171 [2024-11-05 12:54:49.296538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852508 ] 00:40:20.171 [2024-11-05 12:54:49.363050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.429 [2024-11-05 12:54:49.412372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:20.429 Running I/O for 10 seconds... 
00:40:22.732 4960.00 IOPS, 38.75 MiB/s [2024-11-05T11:54:52.903Z] 5029.50 IOPS, 39.29 MiB/s [2024-11-05T11:54:53.836Z] 5033.67 IOPS, 39.33 MiB/s [2024-11-05T11:54:54.770Z] 5064.50 IOPS, 39.57 MiB/s [2024-11-05T11:54:55.703Z] 5071.20 IOPS, 39.62 MiB/s [2024-11-05T11:54:57.076Z] 5075.67 IOPS, 39.65 MiB/s [2024-11-05T11:54:58.009Z] 5078.29 IOPS, 39.67 MiB/s [2024-11-05T11:54:58.943Z] 5075.75 IOPS, 39.65 MiB/s [2024-11-05T11:54:59.876Z] 5082.89 IOPS, 39.71 MiB/s [2024-11-05T11:54:59.876Z] 5088.30 IOPS, 39.75 MiB/s
00:40:30.638 Latency(us)
00:40:30.638 [2024-11-05T11:54:59.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:30.638 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:40:30.638 Verification LBA range: start 0x0 length 0x1000
00:40:30.638 Nvme1n1 : 10.06 5071.58 39.62 0.00 0.00 25080.72 3398.16 44273.21
00:40:30.638 [2024-11-05T11:54:59.876Z] ===================================================================================================================
00:40:30.638 [2024-11-05T11:54:59.876Z] Total : 5071.58 39.62 0.00 0.00 25080.72 3398.16 44273.21
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=853705
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:40:30.896 {
00:40:30.896 "params": {
00:40:30.896 "name": "Nvme$subsystem",
00:40:30.896 "trtype": "$TEST_TRANSPORT",
00:40:30.896 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:30.896 "adrfam": "ipv4",
00:40:30.896 "trsvcid": "$NVMF_PORT",
00:40:30.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:30.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:30.896 "hdgst": ${hdgst:-false},
00:40:30.896 "ddgst": ${ddgst:-false}
00:40:30.896 },
00:40:30.896 "method": "bdev_nvme_attach_controller"
00:40:30.896 }
00:40:30.896 EOF
00:40:30.896 )")
[2024-11-05 12:54:59.891852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-05 12:54:59.891928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:40:30.896 12:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:40:30.896 "params": {
00:40:30.896 "name": "Nvme1",
00:40:30.896 "trtype": "tcp",
00:40:30.896 "traddr": "10.0.0.2",
00:40:30.896 "adrfam": "ipv4",
00:40:30.896 "trsvcid": "4420",
00:40:30.896 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:40:30.896 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:40:30.896 "hdgst": false,
00:40:30.896 "ddgst": false
00:40:30.896 },
00:40:30.896 "method": "bdev_nvme_attach_controller"
00:40:30.896 }'
00:40:30.896 [2024-11-05 12:54:59.899769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:30.896 [2024-11-05 12:54:59.899801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:30.896 [2024-11-05 12:54:59.907774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:30.896 [2024-11-05 12:54:59.907795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:30.896 [2024-11-05 12:54:59.915771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:30.896 [2024-11-05 12:54:59.915791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:30.896 [2024-11-05 12:54:59.923769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:30.896 [2024-11-05 12:54:59.923788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:30.896 [2024-11-05 12:54:59.931768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:30.896 [2024-11-05 12:54:59.931787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:30.896 [2024-11-05 12:54:59.932936] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization...
00:40:30.896 [2024-11-05 12:54:59.933011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853705 ] 00:40:30.896 [2024-11-05 12:54:59.939767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.939787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:54:59.947767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.947785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:54:59.955767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.955786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:54:59.963783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.963801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:54:59.971767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.971785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:54:59.979767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.979786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:54:59.987767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.987786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:40:30.896 [2024-11-05 12:54:59.995766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:54:59.995785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.001521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.896 [2024-11-05 12:55:00.003809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.003838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.011889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.011957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.019881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.019951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.027812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.027845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.035815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.035896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.043810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.043866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.051788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.051817] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.059356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.896 [2024-11-05 12:55:00.059795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.059817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.067777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.067805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.075800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.075836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.083807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.083871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.091801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.091834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.896 [2024-11-05 12:55:00.099804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.896 [2024-11-05 12:55:00.099857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.897 [2024-11-05 12:55:00.107807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.897 [2024-11-05 12:55:00.107855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.897 [2024-11-05 12:55:00.115805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:40:30.897 [2024-11-05 12:55:00.115854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.897 [2024-11-05 12:55:00.123781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.897 [2024-11-05 12:55:00.123808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.897 [2024-11-05 12:55:00.131816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.897 [2024-11-05 12:55:00.131872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.139854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.139918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.147804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.147858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.155774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.155797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.163773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.163797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.171793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.171817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.179791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 
12:55:00.179816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.187774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.187797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.195774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.195813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.203773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.203795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.211773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.211794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.219772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.219794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.227772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.227794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.235773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.235794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.243778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.243801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.251775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.251798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.259776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.259800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.267780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.267805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.275798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.275827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 Running I/O for 5 seconds... 
00:40:31.156 [2024-11-05 12:55:00.293586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.293615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.308607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.308634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.318615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.318642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.332892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.332920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.342687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.342712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.356053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.356094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.366129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.366157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.156 [2024-11-05 12:55:00.381848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.156 [2024-11-05 12:55:00.381884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.398088] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.398116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.407982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.408009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.420259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.420284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.431065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.431093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.441803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.441828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.456712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.456737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.466255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.466281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.482199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.482224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.497498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.497526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.507481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.507508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.414 [2024-11-05 12:55:00.519614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.414 [2024-11-05 12:55:00.519639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.530459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.530485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.543877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.543905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.553590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.553615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.565466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.565491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.580826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.580853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.590182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 
[2024-11-05 12:55:00.590222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.605336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.605363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.615092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.615120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.626936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.626963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.638040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.638066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.415 [2024-11-05 12:55:00.653601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.415 [2024-11-05 12:55:00.653629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.672 [2024-11-05 12:55:00.663257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.672 [2024-11-05 12:55:00.663298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.672 [2024-11-05 12:55:00.675414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.672 [2024-11-05 12:55:00.675459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.672 [2024-11-05 12:55:00.686497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.672 [2024-11-05 12:55:00.686523] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.672 [2024-11-05 12:55:00.702967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.702994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.717404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.717431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.727293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.727333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.739244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.739270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.750060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.750085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.764307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.764335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.773795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.773820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.785503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.785528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:31.673 [2024-11-05 12:55:00.800783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.800810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.810632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.810656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.824693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.824717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.834191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.834229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.848019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.848047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.857111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.857151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.868994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.869020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.879711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.879743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.890960] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.890988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.903768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.903796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.673 [2024-11-05 12:55:00.913085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.673 [2024-11-05 12:55:00.913113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:00.924697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:00.924723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:00.935172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:00.935197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:00.949227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:00.949252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:00.958602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:00.958626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:00.972117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:00.972157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:00.981955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:00.981982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:00.993573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:00.993599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.007994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.008020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.017634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.017658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.029403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.029427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.040281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.040320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.050565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.050590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.065682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.065709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.074640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 
[2024-11-05 12:55:01.074679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.086237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.086278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.101952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.101987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.119764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.119790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.129463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.129488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.141352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.141376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.157157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.157183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.931 [2024-11-05 12:55:01.166159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.931 [2024-11-05 12:55:01.166184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.180125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.180170] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.190542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.190569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.204677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.204703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.214749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.214776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.228689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.228715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.238612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.238649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.252483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.252523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.262214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.262238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.277243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.277269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:32.189 11573.00 IOPS, 90.41 MiB/s [2024-11-05T11:55:01.427Z] [2024-11-05 12:55:01.286907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.286933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.300640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.300679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.311400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.311424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.322250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.322289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.338994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.339030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.189 [2024-11-05 12:55:01.353530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.189 [2024-11-05 12:55:01.353557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.190 [2024-11-05 12:55:01.362838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.190 [2024-11-05 12:55:01.362885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.190 [2024-11-05 12:55:01.378252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.190 [2024-11-05 12:55:01.378292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:32.190 [2024-11-05 12:55:01.393715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.190 [2024-11-05 12:55:01.393755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.190 [2024-11-05 12:55:01.403087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.190 [2024-11-05 12:55:01.403113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.190 [2024-11-05 12:55:01.414922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.190 [2024-11-05 12:55:01.414946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.190 [2024-11-05 12:55:01.430198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.190 [2024-11-05 12:55:01.430225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.445681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.445719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.455311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.455337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.467310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.467336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.478180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.478219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.491459] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.491486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.500977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.501003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.512857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.512912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.523539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.523564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.447 [2024-11-05 12:55:01.534432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.447 [2024-11-05 12:55:01.534457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.548747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.548787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.557936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.557962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.569696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.569720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.585386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.585410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.594962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.594988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.608769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.608795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.618418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.618444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.632954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.632981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.642707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.642731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.656324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.656349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.666021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 [2024-11-05 12:55:01.666046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.448 [2024-11-05 12:55:01.678063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.448 
[2024-11-05 12:55:01.678090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.692414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.692441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.702205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.702247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.717006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.717032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.726391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.726419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.742031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.742058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.751337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.751360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.762920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.762947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.775177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.775202] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.787123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.787150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.801655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.801682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.820094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.820119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.829729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.829753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.841492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.841531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.857165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.857206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.866768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.866793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.881856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.881887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:32.706 [2024-11-05 12:55:01.891119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.891162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.902782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.902809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.915301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.915328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.927135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.927162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.706 [2024-11-05 12:55:01.940115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.706 [2024-11-05 12:55:01.940143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:01.949536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:01.949578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:01.961418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:01.961445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:01.972172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:01.972216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:01.983132] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:01.983174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:01.994085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:01.994110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.007174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.007202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.016747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.016772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.028517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.028542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.039199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.039223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.050411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.050437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.063231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.063257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.075096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.075124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.088978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.089005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.097960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.097985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.109899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.109925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.124886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.124927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.134418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.134443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.150720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.150759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.160711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.160736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.172665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 
[2024-11-05 12:55:02.172690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.183330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.183355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.964 [2024-11-05 12:55:02.194586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.964 [2024-11-05 12:55:02.194625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.209847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.209896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.219567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.219591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.231436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.231463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.242268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.242293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.256787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.256813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.266585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.266627] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.281260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.281287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 11649.00 IOPS, 91.01 MiB/s [2024-11-05T11:55:02.461Z] [2024-11-05 12:55:02.297743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.297769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.307507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.307533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.319680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.319710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.330721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.330746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.344097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.344124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.353236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.353262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.369256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.369297] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.378792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.378818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.392719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.392745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.403042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.403070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.415141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.415182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.427883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.427910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.437245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.437271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.453437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.453463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.223 [2024-11-05 12:55:02.463121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.223 [2024-11-05 12:55:02.463163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:33.483 [2024-11-05 12:55:02.474938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.474974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.489516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.489543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.498491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.498517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.512501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.512542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.522806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.522830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.536454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.536480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.545728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.545752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.557383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.557408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.573426] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.573469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.582749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.582789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.598972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.598999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.609026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.609054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.620799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.620825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.631785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.631811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.642496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.642523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.658554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.658581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.673996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.674023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.683287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.683311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.694973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.695001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.709751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.709799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.483 [2024-11-05 12:55:02.719483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.483 [2024-11-05 12:55:02.719508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.731380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.731405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.745168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.745196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.754834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.754881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.768642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 
[2024-11-05 12:55:02.768667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.777598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.777623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.789259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.789284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.805198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.805237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.814601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.814641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.829001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.829027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.838161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.838200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.853524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.853564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.863167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.863209] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.874882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.874922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.887362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.887389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.897040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.897067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.908610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.908651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.919161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.919185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.934061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.934094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.943208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.943232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.955193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.955231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:33.744 [2024-11-05 12:55:02.968299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.968326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:33.744 [2024-11-05 12:55:02.977949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:33.744 [2024-11-05 12:55:02.977976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:02.989893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:02.989920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.005421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.005461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.015034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.015061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.029035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.029062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.038941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.038967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.053138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.053165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.062885] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.062911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.077272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.077298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.086705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.086732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.102869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.102906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.117209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.117237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.126653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.126679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.138642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.138667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.152699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.152725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.162410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.162435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.177107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.177149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.186611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.186636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.202305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.202331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.219582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.219624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.230104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.230144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.003 [2024-11-05 12:55:03.243776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.003 [2024-11-05 12:55:03.243803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.253326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.253350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.265407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 
[2024-11-05 12:55:03.265432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.280049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.280076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.289741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.289766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 11677.33 IOPS, 91.23 MiB/s [2024-11-05T11:55:03.500Z] [2024-11-05 12:55:03.301280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.301304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.312387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.312425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.323035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.323062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.337201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.337228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.346497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.346521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.360756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 
[2024-11-05 12:55:03.360781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.369895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.369920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.381271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.381299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.397398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.397422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.406795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.406821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.420653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.420677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.429509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.429534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.441053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.441079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.451675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.451701] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.462197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.462221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.476483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.476511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.485428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.485453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.262 [2024-11-05 12:55:03.497066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.262 [2024-11-05 12:55:03.497094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.506856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.506896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.520543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.520568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.530037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.530064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.541462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.541488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:34.521 [2024-11-05 12:55:03.557415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.557442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.565921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.565948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.577621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.577647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.592534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.592577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.602038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.602073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.613913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.613954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.628956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.628983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.638950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.638977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.652943] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.652970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.662520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.662545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.676528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.676552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.686500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.686525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.700209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.700250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.710581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.710620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.726143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.726184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.743945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.743971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.521 [2024-11-05 12:55:03.753850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:34.521 [2024-11-05 12:55:03.753897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.779 [2024-11-05 12:55:03.767654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.779 [2024-11-05 12:55:03.767679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.779 [2024-11-05 12:55:03.776813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.779 [2024-11-05 12:55:03.776838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.779 [2024-11-05 12:55:03.788481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.779 [2024-11-05 12:55:03.788505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.779 [2024-11-05 12:55:03.799021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.779 [2024-11-05 12:55:03.799048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.779 [2024-11-05 12:55:03.811352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.779 [2024-11-05 12:55:03.811394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.779 [2024-11-05 12:55:03.821810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.779 [2024-11-05 12:55:03.821834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.779 [2024-11-05 12:55:03.835418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.835470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.845252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 
[2024-11-05 12:55:03.845277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.857187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.857225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.872594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.872620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.881793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.881820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.893301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.893325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.903675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.903703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.914665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.914690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.928550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.928575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.938200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.938238] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.949926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.949951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.966376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.966401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.981972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.982013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:03.991432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:03.991457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:04.003284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:04.003309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:34.780 [2024-11-05 12:55:04.014563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:34.780 [2024-11-05 12:55:04.014589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.030599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.030625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.045365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.045393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:35.039 [2024-11-05 12:55:04.055426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.055452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.067302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.067334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.081899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.081926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.091600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.091625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.102830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.102855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.116647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.116674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.126576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.126602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.141136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.141161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.150917] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.150958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.164798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.164822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.174138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.174181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.185818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.185868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.202707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.202732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.217889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.217930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.227239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.227264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.238963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.239005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:35.039 [2024-11-05 12:55:04.252946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:35.039 [2024-11-05 12:55:04.252973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair "subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats for every add attempt between 12:55:04.262 and 12:55:05.301; periodic throughput samples during this window: 11704.00 IOPS, 91.44 MiB/s [2024-11-05T11:55:04.537Z]; 11703.60 IOPS, 91.43 MiB/s [2024-11-05T11:55:05.318Z] ...]
00:40:36.080 Latency(us)
00:40:36.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:36.080 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:40:36.080 Nvme1n1 : 5.01 11704.33 91.44 0.00 0.00 10922.34 2973.39 18738.44
00:40:36.080 ===================================================================================================================
Total : 11704.33 91.44 0.00 0.00 10922.34 2973.39 18738.44
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats for the remaining add attempts between 12:55:05.307 and 12:55:05.475 ...]
00:40:36.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (853705) - No such process
00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 853705
00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.339 12:55:05
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:36.339 delay0 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:36.339 12:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:36.597 [2024-11-05 12:55:05.635010] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:44.719 Initializing NVMe Controllers 00:40:44.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:44.719 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:44.719 Initialization complete. Launching workers. 00:40:44.719 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 235, failed: 21001 00:40:44.719 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21103, failed to submit 133 00:40:44.719 success 21036, unsuccessful 67, failed 0 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:44.719 rmmod nvme_tcp 00:40:44.719 rmmod nvme_fabrics 00:40:44.719 rmmod nvme_keyring 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 852484 ']' 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
852484 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 852484 ']' 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 852484 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 852484 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 852484' 00:40:44.719 killing process with pid 852484 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 852484 00:40:44.719 12:55:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 852484 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:44.719 12:55:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:44.719 12:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:46.102 00:40:46.102 real 0m28.772s 00:40:46.102 user 0m39.035s 00:40:46.102 sys 0m10.886s 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:46.102 ************************************ 00:40:46.102 END TEST nvmf_zcopy 00:40:46.102 ************************************ 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:46.102 12:55:15 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:46.102 ************************************ 00:40:46.102 START TEST nvmf_nmic 00:40:46.102 ************************************ 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:46.102 * Looking for test storage... 00:40:46.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@366 -- # ver2[v]=2 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:46.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.102 --rc genhtml_branch_coverage=1 00:40:46.102 --rc genhtml_function_coverage=1 00:40:46.102 --rc genhtml_legend=1 00:40:46.102 --rc geninfo_all_blocks=1 00:40:46.102 --rc geninfo_unexecuted_blocks=1 00:40:46.102 00:40:46.102 ' 00:40:46.102 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:46.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.103 --rc genhtml_branch_coverage=1 00:40:46.103 --rc genhtml_function_coverage=1 00:40:46.103 --rc genhtml_legend=1 00:40:46.103 --rc geninfo_all_blocks=1 00:40:46.103 --rc geninfo_unexecuted_blocks=1 00:40:46.103 00:40:46.103 ' 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:46.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.103 --rc genhtml_branch_coverage=1 00:40:46.103 --rc genhtml_function_coverage=1 00:40:46.103 --rc genhtml_legend=1 00:40:46.103 --rc geninfo_all_blocks=1 00:40:46.103 --rc geninfo_unexecuted_blocks=1 00:40:46.103 00:40:46.103 ' 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:46.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:46.103 --rc genhtml_branch_coverage=1 00:40:46.103 --rc genhtml_function_coverage=1 00:40:46.103 --rc genhtml_legend=1 00:40:46.103 --rc geninfo_all_blocks=1 00:40:46.103 --rc geninfo_unexecuted_blocks=1 00:40:46.103 00:40:46.103 ' 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:46.103 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.361 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.361 12:55:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:46.362 12:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.899 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:48.899 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:48.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:48.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.899 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.899 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:48.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.900 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:48.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.900 12:55:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:40:48.900 00:40:48.900 --- 10.0.0.2 ping statistics --- 00:40:48.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.900 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:40:48.900 00:40:48.900 --- 10.0.0.1 ping statistics --- 00:40:48.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.900 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=857201 
00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 857201 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 857201 ']' 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.900 [2024-11-05 12:55:17.727547] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:48.900 [2024-11-05 12:55:17.728615] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:40:48.900 [2024-11-05 12:55:17.728683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.900 [2024-11-05 12:55:17.800887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:48.900 [2024-11-05 12:55:17.848059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.900 [2024-11-05 12:55:17.848114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:48.900 [2024-11-05 12:55:17.848144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.900 [2024-11-05 12:55:17.848165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.900 [2024-11-05 12:55:17.848176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:48.900 [2024-11-05 12:55:17.849737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.900 [2024-11-05 12:55:17.849796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:48.900 [2024-11-05 12:55:17.849896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:48.900 [2024-11-05 12:55:17.849900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.900 [2024-11-05 12:55:17.933163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:48.900 [2024-11-05 12:55:17.933384] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:48.900 [2024-11-05 12:55:17.933704] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:48.900 [2024-11-05 12:55:17.934353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:48.900 [2024-11-05 12:55:17.934594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:48.900 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:48.901 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:48.901 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:48.901 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 [2024-11-05 12:55:17.990740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 Malloc0 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 [2024-11-05 12:55:18.058997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:48.901 test case1: single bdev can't be used in multiple subsystems 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 [2024-11-05 12:55:18.082700] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:40:48.901 [2024-11-05 12:55:18.082730] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:48.901 [2024-11-05 12:55:18.082762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:48.901 request: 00:40:48.901 { 00:40:48.901 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:48.901 "namespace": { 00:40:48.901 "bdev_name": "Malloc0", 00:40:48.901 "no_auto_visible": false 00:40:48.901 }, 00:40:48.901 "method": "nvmf_subsystem_add_ns", 00:40:48.901 "req_id": 1 00:40:48.901 } 00:40:48.901 Got JSON-RPC error response 00:40:48.901 response: 00:40:48.901 { 00:40:48.901 "code": -32602, 00:40:48.901 "message": "Invalid parameters" 00:40:48.901 } 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:48.901 Adding namespace failed - expected result. 
00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:48.901 test case2: host connect to nvmf target in multiple paths 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.901 [2024-11-05 12:55:18.090790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.901 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:49.160 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:49.421 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:49.421 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:40:49.421 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:40:49.421 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:40:49.421 12:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:40:51.956 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:40:51.956 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:40:51.956 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:40:51.956 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:40:51.956 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:40:51.956 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:40:51.956 12:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:51.956 [global] 00:40:51.956 thread=1 00:40:51.956 invalidate=1 00:40:51.956 rw=write 00:40:51.956 time_based=1 00:40:51.956 runtime=1 00:40:51.956 ioengine=libaio 00:40:51.956 direct=1 00:40:51.956 bs=4096 00:40:51.956 iodepth=1 00:40:51.956 norandommap=0 00:40:51.956 numjobs=1 00:40:51.956 00:40:51.956 verify_dump=1 00:40:51.956 verify_backlog=512 00:40:51.956 verify_state_save=0 00:40:51.956 do_verify=1 00:40:51.956 verify=crc32c-intel 00:40:51.956 [job0] 00:40:51.956 filename=/dev/nvme0n1 00:40:51.956 Could not set queue depth (nvme0n1) 00:40:51.956 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:51.956 fio-3.35 00:40:51.956 Starting 1 thread 00:40:52.896 00:40:52.896 job0: (groupid=0, jobs=1): err= 0: pid=857654: Tue Nov 5 12:55:21 
2024 00:40:52.896 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:52.896 slat (nsec): min=6288, max=32162, avg=7820.06, stdev=2099.47 00:40:52.896 clat (usec): min=204, max=327, avg=242.94, stdev=14.41 00:40:52.896 lat (usec): min=211, max=348, avg=250.76, stdev=14.65 00:40:52.896 clat percentiles (usec): 00:40:52.896 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229], 00:40:52.896 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:40:52.896 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 265], 00:40:52.896 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 318], 99.95th=[ 318], 00:40:52.896 | 99.99th=[ 330] 00:40:52.896 write: IOPS=2553, BW=9.97MiB/s (10.5MB/s)(9.98MiB/1001msec); 0 zone resets 00:40:52.896 slat (nsec): min=8795, max=60626, avg=10307.10, stdev=1533.15 00:40:52.896 clat (usec): min=143, max=1114, avg=175.43, stdev=41.49 00:40:52.896 lat (usec): min=152, max=1124, avg=185.74, stdev=41.63 00:40:52.896 clat percentiles (usec): 00:40:52.896 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 159], 00:40:52.896 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:40:52.896 | 70.00th=[ 176], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 227], 00:40:52.896 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 1004], 99.95th=[ 1074], 00:40:52.896 | 99.99th=[ 1123] 00:40:52.896 bw ( KiB/s): min= 9864, max= 9864, per=96.58%, avg=9864.00, stdev= 0.00, samples=1 00:40:52.896 iops : min= 2466, max= 2466, avg=2466.00, stdev= 0.00, samples=1 00:40:52.896 lat (usec) : 250=84.62%, 500=15.29%, 1000=0.02% 00:40:52.896 lat (msec) : 2=0.07% 00:40:52.896 cpu : usr=2.30%, sys=4.20%, ctx=4604, majf=0, minf=1 00:40:52.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:52.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.896 issued rwts: total=2048,2556,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:40:52.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:52.896 00:40:52.896 Run status group 0 (all jobs): 00:40:52.896 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:40:52.896 WRITE: bw=9.97MiB/s (10.5MB/s), 9.97MiB/s-9.97MiB/s (10.5MB/s-10.5MB/s), io=9.98MiB (10.5MB), run=1001-1001msec 00:40:52.896 00:40:52.896 Disk stats (read/write): 00:40:52.896 nvme0n1: ios=2042/2048, merge=0/0, ticks=496/369, in_queue=865, util=91.28% 00:40:52.896 12:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:52.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:52.896 12:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:52.896 rmmod nvme_tcp 00:40:52.896 rmmod nvme_fabrics 00:40:52.896 rmmod nvme_keyring 00:40:52.896 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:53.154 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:53.154 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:53.154 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 857201 ']' 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 857201 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 857201 ']' 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 857201 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 857201 00:40:53.155 
12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 857201' 00:40:53.155 killing process with pid 857201 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 857201 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 857201 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:53.155 12:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:55.776 00:40:55.776 real 0m9.239s 00:40:55.776 user 0m16.948s 00:40:55.776 sys 0m3.601s 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:55.776 ************************************ 00:40:55.776 END TEST nvmf_nmic 00:40:55.776 ************************************ 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:55.776 ************************************ 00:40:55.776 START TEST nvmf_fio_target 00:40:55.776 ************************************ 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:55.776 * Looking for test storage... 
00:40:55.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:55.776 
12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:55.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.776 --rc genhtml_branch_coverage=1 00:40:55.776 --rc genhtml_function_coverage=1 00:40:55.776 --rc genhtml_legend=1 00:40:55.776 --rc geninfo_all_blocks=1 00:40:55.776 --rc geninfo_unexecuted_blocks=1 00:40:55.776 00:40:55.776 ' 00:40:55.776 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:55.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.777 --rc genhtml_branch_coverage=1 00:40:55.777 --rc genhtml_function_coverage=1 00:40:55.777 --rc genhtml_legend=1 00:40:55.777 --rc geninfo_all_blocks=1 00:40:55.777 --rc geninfo_unexecuted_blocks=1 00:40:55.777 00:40:55.777 ' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:55.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.777 --rc genhtml_branch_coverage=1 00:40:55.777 --rc genhtml_function_coverage=1 00:40:55.777 --rc genhtml_legend=1 00:40:55.777 --rc geninfo_all_blocks=1 00:40:55.777 --rc geninfo_unexecuted_blocks=1 00:40:55.777 00:40:55.777 ' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:55.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.777 --rc genhtml_branch_coverage=1 00:40:55.777 --rc genhtml_function_coverage=1 00:40:55.777 --rc genhtml_legend=1 00:40:55.777 --rc geninfo_all_blocks=1 
00:40:55.777 --rc geninfo_unexecuted_blocks=1 00:40:55.777 00:40:55.777 ' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:55.777 
12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.777 12:55:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:55.777 
12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:55.777 12:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:55.777 12:55:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:57.681 12:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:57.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:57.681 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.681 
12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.681 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:57.682 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:57.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:57.682 12:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:57.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:57.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:40:57.682 00:40:57.682 --- 10.0.0.2 ping statistics --- 00:40:57.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.682 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:57.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:57.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:40:57.682 00:40:57.682 --- 10.0.0.1 ping statistics --- 00:40:57.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.682 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:57.682 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:57.941 12:55:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=859780 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 859780 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 859780 ']' 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:57.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:57.941 12:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:57.941 [2024-11-05 12:55:26.996008] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:57.941 [2024-11-05 12:55:26.997092] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:40:57.941 [2024-11-05 12:55:26.997161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:57.941 [2024-11-05 12:55:27.068369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:57.941 [2024-11-05 12:55:27.114456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:57.941 [2024-11-05 12:55:27.114510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:57.941 [2024-11-05 12:55:27.114539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:57.941 [2024-11-05 12:55:27.114550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:57.941 [2024-11-05 12:55:27.114559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:57.941 [2024-11-05 12:55:27.116035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.941 [2024-11-05 12:55:27.116098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:57.941 [2024-11-05 12:55:27.116172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:57.941 [2024-11-05 12:55:27.116176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.198 [2024-11-05 12:55:27.198670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:58.199 [2024-11-05 12:55:27.198912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:58.199 [2024-11-05 12:55:27.199143] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:58.199 [2024-11-05 12:55:27.199695] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:58.199 [2024-11-05 12:55:27.199957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:58.199 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:58.199 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:40:58.199 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:58.199 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:58.199 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:58.199 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:58.199 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:58.456 [2024-11-05 12:55:27.512950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:58.456 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:58.716 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:58.716 12:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:40:58.976 12:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:58.976 12:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:59.236 12:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:59.236 12:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:59.802 12:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:59.802 12:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:59.802 12:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:00.367 12:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:00.368 12:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:00.368 12:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:00.368 12:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:00.934 12:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:41:00.934 12:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:01.193 12:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:01.451 12:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:01.451 12:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:01.709 12:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:01.709 12:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:01.966 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:02.223 [2024-11-05 12:55:31.269073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:02.223 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:02.481 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:02.740 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:02.740 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:02.740 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:41:02.740 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:41:02.740 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:41:02.740 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:41:02.740 12:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:41:05.271 12:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:41:05.271 12:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:41:05.271 12:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:41:05.271 12:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:41:05.271 12:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:41:05.271 12:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:41:05.271 12:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:05.271 [global] 00:41:05.271 thread=1 00:41:05.271 invalidate=1 00:41:05.271 rw=write 00:41:05.271 time_based=1 00:41:05.271 runtime=1 00:41:05.271 ioengine=libaio 00:41:05.271 direct=1 00:41:05.271 bs=4096 00:41:05.271 iodepth=1 00:41:05.271 norandommap=0 00:41:05.271 numjobs=1 00:41:05.271 00:41:05.271 verify_dump=1 00:41:05.271 verify_backlog=512 00:41:05.271 verify_state_save=0 00:41:05.271 do_verify=1 00:41:05.271 verify=crc32c-intel 00:41:05.271 [job0] 00:41:05.271 filename=/dev/nvme0n1 00:41:05.271 [job1] 00:41:05.271 filename=/dev/nvme0n2 00:41:05.271 [job2] 00:41:05.271 filename=/dev/nvme0n3 00:41:05.271 [job3] 00:41:05.271 filename=/dev/nvme0n4 00:41:05.271 Could not set queue depth (nvme0n1) 00:41:05.271 Could not set queue depth (nvme0n2) 00:41:05.271 Could not set queue depth (nvme0n3) 00:41:05.271 Could not set queue depth (nvme0n4) 00:41:05.271 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.271 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.271 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.271 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:05.271 fio-3.35 00:41:05.271 Starting 4 threads 00:41:06.645 00:41:06.645 job0: (groupid=0, jobs=1): err= 0: pid=860725: Tue Nov 5 12:55:35 2024 00:41:06.645 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:41:06.645 slat (nsec): min=12697, max=28088, avg=13797.68, stdev=3225.42 00:41:06.645 clat (usec): min=40341, max=41029, avg=40955.25, stdev=138.13 00:41:06.645 lat (usec): min=40370, 
max=41043, avg=40969.05, stdev=135.00 00:41:06.645 clat percentiles (usec): 00:41:06.645 | 1.00th=[40109], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:06.645 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:06.645 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:06.645 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:06.645 | 99.99th=[41157] 00:41:06.645 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:41:06.645 slat (nsec): min=6785, max=38708, avg=9729.28, stdev=4539.55 00:41:06.645 clat (usec): min=148, max=419, avg=233.35, stdev=38.66 00:41:06.645 lat (usec): min=161, max=431, avg=243.08, stdev=38.25 00:41:06.645 clat percentiles (usec): 00:41:06.645 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 180], 20.00th=[ 212], 00:41:06.645 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:41:06.645 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 297], 00:41:06.645 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 420], 99.95th=[ 420], 00:41:06.645 | 99.99th=[ 420] 00:41:06.645 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:41:06.645 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:06.645 lat (usec) : 250=73.03%, 500=22.85% 00:41:06.645 lat (msec) : 50=4.12% 00:41:06.645 cpu : usr=0.29%, sys=0.39%, ctx=535, majf=0, minf=1 00:41:06.645 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.645 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.645 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.645 job1: (groupid=0, jobs=1): err= 0: pid=860726: Tue Nov 5 12:55:35 2024 00:41:06.645 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 
00:41:06.645 slat (nsec): min=7010, max=29846, avg=13883.23, stdev=3841.92 00:41:06.645 clat (usec): min=40867, max=41112, avg=40987.85, stdev=55.85 00:41:06.646 lat (usec): min=40897, max=41119, avg=41001.73, stdev=53.67 00:41:06.646 clat percentiles (usec): 00:41:06.646 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:06.646 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:06.646 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:06.646 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:06.646 | 99.99th=[41157] 00:41:06.646 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:41:06.646 slat (nsec): min=5893, max=81313, avg=10155.52, stdev=6182.60 00:41:06.646 clat (usec): min=154, max=1294, avg=249.04, stdev=94.31 00:41:06.646 lat (usec): min=170, max=1315, avg=259.20, stdev=97.14 00:41:06.646 clat percentiles (usec): 00:41:06.646 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:41:06.646 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 229], 00:41:06.646 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 355], 95.00th=[ 424], 00:41:06.646 | 99.00th=[ 510], 99.50th=[ 857], 99.90th=[ 1303], 99.95th=[ 1303], 00:41:06.646 | 99.99th=[ 1303] 00:41:06.646 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:41:06.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:06.646 lat (usec) : 250=71.16%, 500=23.41%, 750=0.75%, 1000=0.37% 00:41:06.646 lat (msec) : 2=0.19%, 50=4.12% 00:41:06.646 cpu : usr=0.48%, sys=0.29%, ctx=535, majf=0, minf=1 00:41:06.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.646 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:41:06.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.646 job2: (groupid=0, jobs=1): err= 0: pid=860727: Tue Nov 5 12:55:35 2024 00:41:06.646 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:41:06.646 slat (nsec): min=8902, max=35808, avg=14894.36, stdev=4822.24 00:41:06.646 clat (usec): min=40915, max=41205, avg=40989.32, stdev=62.47 00:41:06.646 lat (usec): min=40929, max=41214, avg=41004.21, stdev=60.68 00:41:06.646 clat percentiles (usec): 00:41:06.646 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:06.646 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:06.646 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:06.646 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:06.646 | 99.99th=[41157] 00:41:06.646 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:41:06.646 slat (nsec): min=6326, max=30785, avg=9277.06, stdev=3756.75 00:41:06.646 clat (usec): min=169, max=305, avg=193.37, stdev=14.02 00:41:06.646 lat (usec): min=178, max=313, avg=202.65, stdev=14.84 00:41:06.646 clat percentiles (usec): 00:41:06.646 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:41:06.646 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:41:06.646 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 219], 00:41:06.646 | 99.00th=[ 241], 99.50th=[ 265], 99.90th=[ 306], 99.95th=[ 306], 00:41:06.646 | 99.99th=[ 306] 00:41:06.646 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:41:06.646 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:06.646 lat (usec) : 250=95.32%, 500=0.56% 00:41:06.646 lat (msec) : 50=4.12% 00:41:06.646 cpu : usr=0.30%, sys=0.30%, ctx=535, majf=0, minf=1 00:41:06.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:06.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.646 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.646 job3: (groupid=0, jobs=1): err= 0: pid=860728: Tue Nov 5 12:55:35 2024 00:41:06.646 read: IOPS=1831, BW=7325KiB/s (7500kB/s)(7332KiB/1001msec) 00:41:06.646 slat (nsec): min=5754, max=68291, avg=13886.98, stdev=6472.22 00:41:06.646 clat (usec): min=216, max=493, avg=263.63, stdev=19.65 00:41:06.646 lat (usec): min=224, max=511, avg=277.52, stdev=22.52 00:41:06.646 clat percentiles (usec): 00:41:06.646 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 249], 00:41:06.646 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:41:06.646 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:41:06.646 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 412], 99.95th=[ 494], 00:41:06.646 | 99.99th=[ 494] 00:41:06.646 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:06.646 slat (nsec): min=6981, max=74687, avg=15766.55, stdev=7389.56 00:41:06.646 clat (usec): min=149, max=1185, avg=216.71, stdev=54.30 00:41:06.646 lat (usec): min=157, max=1199, avg=232.48, stdev=55.36 00:41:06.646 clat percentiles (usec): 00:41:06.646 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 184], 00:41:06.646 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 204], 60.00th=[ 221], 00:41:06.646 | 70.00th=[ 229], 80.00th=[ 245], 90.00th=[ 285], 95.00th=[ 297], 00:41:06.646 | 99.00th=[ 359], 99.50th=[ 396], 99.90th=[ 865], 99.95th=[ 873], 00:41:06.646 | 99.99th=[ 1188] 00:41:06.646 bw ( KiB/s): min= 8192, max= 8192, per=59.20%, avg=8192.00, stdev= 0.00, samples=1 00:41:06.646 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:06.646 lat (usec) : 250=53.65%, 500=46.25%, 750=0.03%, 1000=0.05% 00:41:06.646 lat (msec) : 2=0.03% 00:41:06.646 cpu : usr=3.70%, 
sys=8.00%, ctx=3886, majf=0, minf=1 00:41:06.646 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.646 issued rwts: total=1833,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.646 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:06.646 00:41:06.646 Run status group 0 (all jobs): 00:41:06.646 READ: bw=7332KiB/s (7508kB/s), 84.9KiB/s-7325KiB/s (87.0kB/s-7500kB/s), io=7596KiB (7778kB), run=1001-1036msec 00:41:06.646 WRITE: bw=13.5MiB/s (14.2MB/s), 1977KiB/s-8184KiB/s (2024kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1036msec 00:41:06.646 00:41:06.646 Disk stats (read/write): 00:41:06.646 nvme0n1: ios=67/512, merge=0/0, ticks=721/117, in_queue=838, util=86.97% 00:41:06.646 nvme0n2: ios=67/512, merge=0/0, ticks=766/129, in_queue=895, util=90.74% 00:41:06.646 nvme0n3: ios=44/512, merge=0/0, ticks=1644/101, in_queue=1745, util=93.63% 00:41:06.646 nvme0n4: ios=1593/1716, merge=0/0, ticks=606/383, in_queue=989, util=94.32% 00:41:06.646 12:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:06.646 [global] 00:41:06.646 thread=1 00:41:06.646 invalidate=1 00:41:06.646 rw=randwrite 00:41:06.646 time_based=1 00:41:06.646 runtime=1 00:41:06.646 ioengine=libaio 00:41:06.646 direct=1 00:41:06.647 bs=4096 00:41:06.647 iodepth=1 00:41:06.647 norandommap=0 00:41:06.647 numjobs=1 00:41:06.647 00:41:06.647 verify_dump=1 00:41:06.647 verify_backlog=512 00:41:06.647 verify_state_save=0 00:41:06.647 do_verify=1 00:41:06.647 verify=crc32c-intel 00:41:06.647 [job0] 00:41:06.647 filename=/dev/nvme0n1 00:41:06.647 [job1] 00:41:06.647 filename=/dev/nvme0n2 00:41:06.647 [job2] 00:41:06.647 filename=/dev/nvme0n3 00:41:06.647 
[job3] 00:41:06.647 filename=/dev/nvme0n4 00:41:06.647 Could not set queue depth (nvme0n1) 00:41:06.647 Could not set queue depth (nvme0n2) 00:41:06.647 Could not set queue depth (nvme0n3) 00:41:06.647 Could not set queue depth (nvme0n4) 00:41:06.647 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.647 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.647 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.647 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:06.647 fio-3.35 00:41:06.647 Starting 4 threads 00:41:08.022 00:41:08.022 job0: (groupid=0, jobs=1): err= 0: pid=861073: Tue Nov 5 12:55:36 2024 00:41:08.022 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:41:08.022 slat (nsec): min=6745, max=18742, avg=14299.32, stdev=2264.68 00:41:08.022 clat (usec): min=40604, max=41022, avg=40966.13, stdev=81.94 00:41:08.022 lat (usec): min=40611, max=41041, avg=40980.43, stdev=83.76 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:08.022 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:08.022 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.022 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:08.022 | 99.99th=[41157] 00:41:08.022 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:41:08.022 slat (nsec): min=6100, max=36847, avg=8455.05, stdev=4024.88 00:41:08.022 clat (usec): min=159, max=3841, avg=217.86, stdev=168.57 00:41:08.022 lat (usec): min=166, max=3847, avg=226.32, stdev=169.00 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 178], 00:41:08.022 | 
30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:41:08.022 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 269], 95.00th=[ 355], 00:41:08.022 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 3851], 99.95th=[ 3851], 00:41:08.022 | 99.99th=[ 3851] 00:41:08.022 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:08.022 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:08.022 lat (usec) : 250=83.52%, 500=12.17% 00:41:08.022 lat (msec) : 4=0.19%, 50=4.12% 00:41:08.022 cpu : usr=0.29%, sys=0.29%, ctx=534, majf=0, minf=1 00:41:08.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.022 job1: (groupid=0, jobs=1): err= 0: pid=861074: Tue Nov 5 12:55:36 2024 00:41:08.022 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:41:08.022 slat (nsec): min=7036, max=18082, avg=13909.00, stdev=2223.68 00:41:08.022 clat (usec): min=21751, max=41291, avg=40121.28, stdev=4103.57 00:41:08.022 lat (usec): min=21769, max=41298, avg=40135.19, stdev=4102.63 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[21627], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:08.022 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:08.022 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.022 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:08.022 | 99.99th=[41157] 00:41:08.022 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:41:08.022 slat (nsec): min=6326, max=24602, avg=8436.60, stdev=3090.57 00:41:08.022 clat (usec): min=153, max=841, avg=226.05, stdev=59.65 
00:41:08.022 lat (usec): min=161, max=848, avg=234.49, stdev=59.64 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 182], 00:41:08.022 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 227], 60.00th=[ 239], 00:41:08.022 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 289], 00:41:08.022 | 99.00th=[ 379], 99.50th=[ 693], 99.90th=[ 840], 99.95th=[ 840], 00:41:08.022 | 99.99th=[ 840] 00:41:08.022 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:08.022 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:08.022 lat (usec) : 250=76.03%, 500=19.10%, 750=0.37%, 1000=0.37% 00:41:08.022 lat (msec) : 50=4.12% 00:41:08.022 cpu : usr=0.20%, sys=0.30%, ctx=536, majf=0, minf=1 00:41:08.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.022 job2: (groupid=0, jobs=1): err= 0: pid=861075: Tue Nov 5 12:55:36 2024 00:41:08.022 read: IOPS=1483, BW=5935KiB/s (6077kB/s)(6172KiB/1040msec) 00:41:08.022 slat (nsec): min=5548, max=22645, avg=6609.91, stdev=2154.39 00:41:08.022 clat (usec): min=205, max=41066, avg=405.98, stdev=2735.79 00:41:08.022 lat (usec): min=211, max=41085, avg=412.59, stdev=2736.43 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 210], 20.00th=[ 212], 00:41:08.022 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:41:08.022 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 237], 95.00th=[ 245], 00:41:08.022 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[41157], 99.95th=[41157], 00:41:08.022 | 99.99th=[41157] 00:41:08.022 write: IOPS=1969, BW=7877KiB/s 
(8066kB/s)(8192KiB/1040msec); 0 zone resets 00:41:08.022 slat (nsec): min=6763, max=35299, avg=9108.34, stdev=3055.50 00:41:08.022 clat (usec): min=143, max=532, avg=183.61, stdev=55.80 00:41:08.022 lat (usec): min=155, max=545, avg=192.72, stdev=57.18 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[ 151], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:41:08.022 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 165], 00:41:08.022 | 70.00th=[ 174], 80.00th=[ 194], 90.00th=[ 258], 95.00th=[ 306], 00:41:08.022 | 99.00th=[ 412], 99.50th=[ 437], 99.90th=[ 474], 99.95th=[ 490], 00:41:08.022 | 99.99th=[ 537] 00:41:08.022 bw ( KiB/s): min= 6344, max=10040, per=52.00%, avg=8192.00, stdev=2613.47, samples=2 00:41:08.022 iops : min= 1586, max= 2510, avg=2048.00, stdev=653.37, samples=2 00:41:08.022 lat (usec) : 250=92.09%, 500=7.69%, 750=0.03% 00:41:08.022 lat (msec) : 50=0.19% 00:41:08.022 cpu : usr=2.79%, sys=2.89%, ctx=3592, majf=0, minf=1 00:41:08.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 issued rwts: total=1543,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.022 job3: (groupid=0, jobs=1): err= 0: pid=861076: Tue Nov 5 12:55:36 2024 00:41:08.022 read: IOPS=759, BW=3037KiB/s (3110kB/s)(3040KiB/1001msec) 00:41:08.022 slat (nsec): min=4950, max=32922, avg=8123.30, stdev=3750.37 00:41:08.022 clat (usec): min=202, max=41156, avg=1004.93, stdev=5513.77 00:41:08.022 lat (usec): min=208, max=41162, avg=1013.06, stdev=5514.51 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 215], 00:41:08.022 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:41:08.022 | 70.00th=[ 229], 80.00th=[ 235], 
90.00th=[ 249], 95.00th=[ 355], 00:41:08.022 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:08.022 | 99.99th=[41157] 00:41:08.022 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:41:08.022 slat (nsec): min=6711, max=34718, avg=10069.16, stdev=3740.36 00:41:08.022 clat (usec): min=146, max=629, avg=210.07, stdev=73.22 00:41:08.022 lat (usec): min=154, max=646, avg=220.14, stdev=74.36 00:41:08.022 clat percentiles (usec): 00:41:08.022 | 1.00th=[ 151], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:41:08.022 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 174], 60.00th=[ 208], 00:41:08.022 | 70.00th=[ 235], 80.00th=[ 260], 90.00th=[ 314], 95.00th=[ 383], 00:41:08.022 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 515], 99.95th=[ 627], 00:41:08.022 | 99.99th=[ 627] 00:41:08.022 bw ( KiB/s): min= 4096, max= 4096, per=26.00%, avg=4096.00, stdev= 0.00, samples=1 00:41:08.022 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:08.022 lat (usec) : 250=82.90%, 500=16.09%, 750=0.17% 00:41:08.022 lat (msec) : 20=0.06%, 50=0.78% 00:41:08.022 cpu : usr=0.90%, sys=1.60%, ctx=1785, majf=0, minf=1 00:41:08.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.022 issued rwts: total=760,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.022 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.022 00:41:08.022 Run status group 0 (all jobs): 00:41:08.022 READ: bw=9027KiB/s (9244kB/s), 86.4KiB/s-5935KiB/s (88.4kB/s-6077kB/s), io=9388KiB (9613kB), run=1001-1040msec 00:41:08.022 WRITE: bw=15.4MiB/s (16.1MB/s), 2010KiB/s-7877KiB/s (2058kB/s-8066kB/s), io=16.0MiB (16.8MB), run=1001-1040msec 00:41:08.022 00:41:08.022 Disk stats (read/write): 00:41:08.022 nvme0n1: ios=67/512, merge=0/0, ticks=718/101, 
in_queue=819, util=86.97% 00:41:08.022 nvme0n2: ios=67/512, merge=0/0, ticks=930/113, in_queue=1043, util=94.31% 00:41:08.022 nvme0n3: ios=1595/2048, merge=0/0, ticks=1336/349, in_queue=1685, util=98.13% 00:41:08.022 nvme0n4: ios=543/524, merge=0/0, ticks=845/132, in_queue=977, util=96.02% 00:41:08.022 12:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:08.022 [global] 00:41:08.022 thread=1 00:41:08.022 invalidate=1 00:41:08.022 rw=write 00:41:08.022 time_based=1 00:41:08.022 runtime=1 00:41:08.022 ioengine=libaio 00:41:08.022 direct=1 00:41:08.022 bs=4096 00:41:08.022 iodepth=128 00:41:08.022 norandommap=0 00:41:08.022 numjobs=1 00:41:08.022 00:41:08.022 verify_dump=1 00:41:08.022 verify_backlog=512 00:41:08.022 verify_state_save=0 00:41:08.022 do_verify=1 00:41:08.022 verify=crc32c-intel 00:41:08.022 [job0] 00:41:08.022 filename=/dev/nvme0n1 00:41:08.022 [job1] 00:41:08.022 filename=/dev/nvme0n2 00:41:08.022 [job2] 00:41:08.022 filename=/dev/nvme0n3 00:41:08.022 [job3] 00:41:08.022 filename=/dev/nvme0n4 00:41:08.022 Could not set queue depth (nvme0n1) 00:41:08.022 Could not set queue depth (nvme0n2) 00:41:08.022 Could not set queue depth (nvme0n3) 00:41:08.022 Could not set queue depth (nvme0n4) 00:41:08.022 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:08.022 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:08.022 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:08.022 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:08.022 fio-3.35 00:41:08.022 Starting 4 threads 00:41:09.394 00:41:09.394 job0: (groupid=0, jobs=1): err= 0: pid=861302: Tue Nov 5 12:55:38 2024 
00:41:09.394 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:41:09.394 slat (usec): min=2, max=3826, avg=85.61, stdev=444.40 00:41:09.394 clat (usec): min=6367, max=15440, avg=11135.55, stdev=1302.78 00:41:09.394 lat (usec): min=6984, max=15460, avg=11221.17, stdev=1326.04 00:41:09.394 clat percentiles (usec): 00:41:09.394 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:41:09.394 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:41:09.394 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12911], 95.00th=[13304], 00:41:09.394 | 99.00th=[14091], 99.50th=[14484], 99.90th=[14877], 99.95th=[15008], 00:41:09.394 | 99.99th=[15401] 00:41:09.394 write: IOPS=5803, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1004msec); 0 zone resets 00:41:09.394 slat (usec): min=3, max=6058, avg=82.01, stdev=408.06 00:41:09.394 clat (usec): min=448, max=15558, avg=10971.78, stdev=1269.86 00:41:09.394 lat (usec): min=3597, max=15577, avg=11053.79, stdev=1282.87 00:41:09.394 clat percentiles (usec): 00:41:09.394 | 1.00th=[ 7308], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10552], 00:41:09.394 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:41:09.394 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:41:09.394 | 99.00th=[14222], 99.50th=[14746], 99.90th=[14877], 99.95th=[15270], 00:41:09.394 | 99.99th=[15533] 00:41:09.394 bw ( KiB/s): min=21016, max=24576, per=35.04%, avg=22796.00, stdev=2517.30, samples=2 00:41:09.394 iops : min= 5254, max= 6144, avg=5699.00, stdev=629.33, samples=2 00:41:09.394 lat (usec) : 500=0.01% 00:41:09.394 lat (msec) : 4=0.36%, 10=15.44%, 20=84.20% 00:41:09.394 cpu : usr=5.58%, sys=8.57%, ctx=565, majf=0, minf=1 00:41:09.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:41:09.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.394 
issued rwts: total=5632,5827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.394 job1: (groupid=0, jobs=1): err= 0: pid=861303: Tue Nov 5 12:55:38 2024 00:41:09.394 read: IOPS=3676, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1009msec) 00:41:09.394 slat (usec): min=2, max=24098, avg=115.11, stdev=912.31 00:41:09.394 clat (usec): min=1201, max=75468, avg=16084.76, stdev=10971.19 00:41:09.394 lat (usec): min=1236, max=75489, avg=16199.87, stdev=11041.40 00:41:09.394 clat percentiles (usec): 00:41:09.394 | 1.00th=[ 1860], 5.00th=[ 6194], 10.00th=[ 9241], 20.00th=[10945], 00:41:09.394 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:41:09.394 | 70.00th=[14091], 80.00th=[21627], 90.00th=[27919], 95.00th=[35390], 00:41:09.394 | 99.00th=[67634], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:41:09.394 | 99.99th=[74974] 00:41:09.394 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:41:09.394 slat (usec): min=3, max=20242, avg=117.48, stdev=784.44 00:41:09.394 clat (usec): min=1241, max=75487, avg=16517.28, stdev=10728.82 00:41:09.394 lat (usec): min=1246, max=75495, avg=16634.76, stdev=10792.26 00:41:09.394 clat percentiles (usec): 00:41:09.394 | 1.00th=[ 3458], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[11076], 00:41:09.394 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11863], 60.00th=[13435], 00:41:09.394 | 70.00th=[21103], 80.00th=[23200], 90.00th=[24249], 95.00th=[32637], 00:41:09.394 | 99.00th=[66323], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], 00:41:09.394 | 99.99th=[74974] 00:41:09.394 bw ( KiB/s): min=13632, max=19120, per=25.17%, avg=16376.00, stdev=3880.60, samples=2 00:41:09.394 iops : min= 3408, max= 4780, avg=4094.00, stdev=970.15, samples=2 00:41:09.394 lat (msec) : 2=1.15%, 4=1.72%, 10=10.27%, 20=59.35%, 50=24.67% 00:41:09.394 lat (msec) : 100=2.83% 00:41:09.394 cpu : usr=3.17%, sys=4.66%, ctx=390, majf=0, minf=1 00:41:09.394 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:09.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.394 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.394 job2: (groupid=0, jobs=1): err= 0: pid=861304: Tue Nov 5 12:55:38 2024 00:41:09.394 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:41:09.394 slat (usec): min=2, max=6004, avg=107.57, stdev=578.96 00:41:09.394 clat (usec): min=9090, max=21564, avg=13735.19, stdev=2573.11 00:41:09.394 lat (usec): min=9101, max=21742, avg=13842.76, stdev=2578.77 00:41:09.394 clat percentiles (usec): 00:41:09.394 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11469], 20.00th=[12256], 00:41:09.394 | 30.00th=[12649], 40.00th=[12911], 50.00th=[12911], 60.00th=[13173], 00:41:09.394 | 70.00th=[13566], 80.00th=[14484], 90.00th=[18482], 95.00th=[20055], 00:41:09.394 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:41:09.394 | 99.99th=[21627] 00:41:09.394 write: IOPS=3794, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1004msec); 0 zone resets 00:41:09.394 slat (usec): min=3, max=28543, avg=154.90, stdev=1217.55 00:41:09.394 clat (usec): min=3484, max=86593, avg=20314.22, stdev=17781.51 00:41:09.394 lat (usec): min=4096, max=86605, avg=20469.13, stdev=17880.13 00:41:09.394 clat percentiles (usec): 00:41:09.394 | 1.00th=[ 7439], 5.00th=[11469], 10.00th=[11863], 20.00th=[12125], 00:41:09.394 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:41:09.394 | 70.00th=[13829], 80.00th=[19792], 90.00th=[48497], 95.00th=[72877], 00:41:09.394 | 99.00th=[82314], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:41:09.394 | 99.99th=[86508] 00:41:09.394 bw ( KiB/s): min= 9992, max=19472, per=22.65%, avg=14732.00, stdev=6703.37, samples=2 00:41:09.394 iops : min= 2498, max= 
4868, avg=3683.00, stdev=1675.84, samples=2 00:41:09.394 lat (msec) : 4=0.01%, 10=2.37%, 20=84.78%, 50=7.71%, 100=5.13% 00:41:09.394 cpu : usr=2.59%, sys=5.58%, ctx=327, majf=0, minf=1 00:41:09.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:41:09.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.394 issued rwts: total=3584,3810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.394 job3: (groupid=0, jobs=1): err= 0: pid=861305: Tue Nov 5 12:55:38 2024 00:41:09.394 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:41:09.394 slat (usec): min=3, max=16770, avg=161.54, stdev=1125.74 00:41:09.394 clat (usec): min=4590, max=47480, avg=19222.27, stdev=6777.42 00:41:09.394 lat (usec): min=4599, max=47490, avg=19383.81, stdev=6879.69 00:41:09.394 clat percentiles (usec): 00:41:09.394 | 1.00th=[ 8356], 5.00th=[13173], 10.00th=[13566], 20.00th=[13566], 00:41:09.394 | 30.00th=[14222], 40.00th=[15270], 50.00th=[16057], 60.00th=[19792], 00:41:09.394 | 70.00th=[21890], 80.00th=[23725], 90.00th=[28181], 95.00th=[33162], 00:41:09.394 | 99.00th=[42206], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:41:09.394 | 99.99th=[47449] 00:41:09.394 write: IOPS=2733, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1015msec); 0 zone resets 00:41:09.394 slat (usec): min=3, max=23816, avg=205.14, stdev=1143.24 00:41:09.394 clat (msec): min=3, max=112, avg=28.40, stdev=17.14 00:41:09.394 lat (msec): min=3, max=112, avg=28.60, stdev=17.24 00:41:09.394 clat percentiles (msec): 00:41:09.394 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 18], 00:41:09.394 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:41:09.394 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 63], 00:41:09.394 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 113], 99.95th=[ 113], 
00:41:09.394 | 99.99th=[ 113] 00:41:09.394 bw ( KiB/s): min= 8880, max=12288, per=16.27%, avg=10584.00, stdev=2409.82, samples=2 00:41:09.394 iops : min= 2220, max= 3072, avg=2646.00, stdev=602.45, samples=2 00:41:09.394 lat (msec) : 4=0.11%, 10=1.56%, 20=40.12%, 50=55.08%, 100=2.29% 00:41:09.394 lat (msec) : 250=0.84% 00:41:09.394 cpu : usr=2.37%, sys=4.93%, ctx=277, majf=0, minf=1 00:41:09.394 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:09.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:09.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:09.394 issued rwts: total=2560,2774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:09.394 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:09.394 00:41:09.394 Run status group 0 (all jobs): 00:41:09.394 READ: bw=59.6MiB/s (62.5MB/s), 9.85MiB/s-21.9MiB/s (10.3MB/s-23.0MB/s), io=60.5MiB (63.4MB), run=1004-1015msec 00:41:09.395 WRITE: bw=63.5MiB/s (66.6MB/s), 10.7MiB/s-22.7MiB/s (11.2MB/s-23.8MB/s), io=64.5MiB (67.6MB), run=1004-1015msec 00:41:09.395 00:41:09.395 Disk stats (read/write): 00:41:09.395 nvme0n1: ios=4678/5120, merge=0/0, ticks=17054/17049, in_queue=34103, util=97.09% 00:41:09.395 nvme0n2: ios=3494/3584, merge=0/0, ticks=42840/35235, in_queue=78075, util=96.65% 00:41:09.395 nvme0n3: ios=2790/3072, merge=0/0, ticks=11465/18412, in_queue=29877, util=88.97% 00:41:09.395 nvme0n4: ios=2094/2191, merge=0/0, ticks=40532/54494, in_queue=95026, util=99.48% 00:41:09.395 12:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:09.395 [global] 00:41:09.395 thread=1 00:41:09.395 invalidate=1 00:41:09.395 rw=randwrite 00:41:09.395 time_based=1 00:41:09.395 runtime=1 00:41:09.395 ioengine=libaio 00:41:09.395 direct=1 00:41:09.395 bs=4096 00:41:09.395 iodepth=128 
00:41:09.395 norandommap=0 00:41:09.395 numjobs=1 00:41:09.395 00:41:09.395 verify_dump=1 00:41:09.395 verify_backlog=512 00:41:09.395 verify_state_save=0 00:41:09.395 do_verify=1 00:41:09.395 verify=crc32c-intel 00:41:09.395 [job0] 00:41:09.395 filename=/dev/nvme0n1 00:41:09.395 [job1] 00:41:09.395 filename=/dev/nvme0n2 00:41:09.395 [job2] 00:41:09.395 filename=/dev/nvme0n3 00:41:09.395 [job3] 00:41:09.395 filename=/dev/nvme0n4 00:41:09.395 Could not set queue depth (nvme0n1) 00:41:09.395 Could not set queue depth (nvme0n2) 00:41:09.395 Could not set queue depth (nvme0n3) 00:41:09.395 Could not set queue depth (nvme0n4) 00:41:09.395 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:09.395 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:09.395 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:09.395 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:09.395 fio-3.35 00:41:09.395 Starting 4 threads 00:41:10.772 00:41:10.772 job0: (groupid=0, jobs=1): err= 0: pid=861530: Tue Nov 5 12:55:39 2024 00:41:10.772 read: IOPS=3779, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1002msec) 00:41:10.772 slat (nsec): min=1948, max=19378k, avg=117777.36, stdev=840553.88 00:41:10.772 clat (usec): min=715, max=71844, avg=14725.50, stdev=9226.79 00:41:10.772 lat (usec): min=3809, max=71865, avg=14843.28, stdev=9315.39 00:41:10.772 clat percentiles (usec): 00:41:10.772 | 1.00th=[ 4359], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10421], 00:41:10.772 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11469], 60.00th=[12256], 00:41:10.772 | 70.00th=[13566], 80.00th=[15401], 90.00th=[23725], 95.00th=[35390], 00:41:10.772 | 99.00th=[52691], 99.50th=[58459], 99.90th=[58459], 99.95th=[71828], 00:41:10.772 | 99.99th=[71828] 00:41:10.772 write: 
IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:41:10.772 slat (usec): min=3, max=22625, avg=125.99, stdev=928.81 00:41:10.772 clat (usec): min=4767, max=74985, avg=17333.11, stdev=12130.77 00:41:10.772 lat (usec): min=4800, max=74999, avg=17459.10, stdev=12215.44 00:41:10.772 clat percentiles (usec): 00:41:10.772 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[11207], 00:41:10.772 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:41:10.772 | 70.00th=[13435], 80.00th=[22414], 90.00th=[35914], 95.00th=[44827], 00:41:10.772 | 99.00th=[57934], 99.50th=[66847], 99.90th=[66847], 99.95th=[67634], 00:41:10.772 | 99.99th=[74974] 00:41:10.772 bw ( KiB/s): min=12288, max=20480, per=24.46%, avg=16384.00, stdev=5792.62, samples=2 00:41:10.772 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:41:10.772 lat (usec) : 750=0.01% 00:41:10.772 lat (msec) : 4=0.20%, 10=11.86%, 20=71.25%, 50=13.22%, 100=3.45% 00:41:10.772 cpu : usr=4.50%, sys=6.99%, ctx=303, majf=0, minf=1 00:41:10.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:10.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:10.772 issued rwts: total=3787,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:10.772 job1: (groupid=0, jobs=1): err= 0: pid=861531: Tue Nov 5 12:55:39 2024 00:41:10.772 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:41:10.772 slat (usec): min=3, max=16224, avg=115.11, stdev=889.50 00:41:10.772 clat (usec): min=6280, max=43682, avg=14703.97, stdev=4769.21 00:41:10.772 lat (usec): min=6288, max=43690, avg=14819.08, stdev=4846.82 00:41:10.772 clat percentiles (usec): 00:41:10.772 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11338], 00:41:10.772 | 30.00th=[11994], 40.00th=[13304], 
50.00th=[14353], 60.00th=[14877], 00:41:10.772 | 70.00th=[16057], 80.00th=[17171], 90.00th=[19006], 95.00th=[22676], 00:41:10.772 | 99.00th=[36439], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:41:10.772 | 99.99th=[43779] 00:41:10.772 write: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1005msec); 0 zone resets 00:41:10.772 slat (usec): min=4, max=28560, avg=107.02, stdev=919.62 00:41:10.772 clat (usec): min=1134, max=48267, avg=14681.42, stdev=6599.14 00:41:10.772 lat (usec): min=3133, max=48287, avg=14788.44, stdev=6655.31 00:41:10.772 clat percentiles (usec): 00:41:10.772 | 1.00th=[ 4621], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[ 9634], 00:41:10.772 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12518], 60.00th=[13960], 00:41:10.772 | 70.00th=[16450], 80.00th=[17957], 90.00th=[23200], 95.00th=[31065], 00:41:10.772 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35914], 99.95th=[43779], 00:41:10.772 | 99.99th=[48497] 00:41:10.772 bw ( KiB/s): min=16952, max=18424, per=26.41%, avg=17688.00, stdev=1040.86, samples=2 00:41:10.772 iops : min= 4238, max= 4606, avg=4422.00, stdev=260.22, samples=2 00:41:10.772 lat (msec) : 2=0.02%, 4=0.20%, 10=14.71%, 20=73.73%, 50=11.33% 00:41:10.772 cpu : usr=3.49%, sys=6.18%, ctx=250, majf=0, minf=1 00:41:10.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:10.773 issued rwts: total=4096,4550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:10.773 job2: (groupid=0, jobs=1): err= 0: pid=861532: Tue Nov 5 12:55:39 2024 00:41:10.773 read: IOPS=3492, BW=13.6MiB/s (14.3MB/s)(14.3MiB/1046msec) 00:41:10.773 slat (usec): min=2, max=25559, avg=136.61, stdev=1089.18 00:41:10.773 clat (usec): min=6091, max=83916, avg=17670.70, stdev=12315.46 00:41:10.773 lat (usec): min=6095, 
max=83933, avg=17807.31, stdev=12403.31 00:41:10.773 clat percentiles (usec): 00:41:10.773 | 1.00th=[ 6128], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[11731], 00:41:10.773 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 60.00th=[13566], 00:41:10.773 | 70.00th=[14484], 80.00th=[21365], 90.00th=[32900], 95.00th=[46924], 00:41:10.773 | 99.00th=[65274], 99.50th=[83362], 99.90th=[83362], 99.95th=[84411], 00:41:10.773 | 99.99th=[84411] 00:41:10.773 write: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1046msec); 0 zone resets 00:41:10.773 slat (usec): min=2, max=23607, avg=108.62, stdev=866.41 00:41:10.773 clat (usec): min=232, max=99075, avg=16543.93, stdev=12290.09 00:41:10.773 lat (usec): min=255, max=109491, avg=16652.55, stdev=12343.34 00:41:10.773 clat percentiles (usec): 00:41:10.773 | 1.00th=[ 2278], 5.00th=[ 7046], 10.00th=[ 8586], 20.00th=[10028], 00:41:10.773 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[13829], 00:41:10.773 | 70.00th=[16581], 80.00th=[20055], 90.00th=[27395], 95.00th=[34866], 00:41:10.773 | 99.00th=[98042], 99.50th=[98042], 99.90th=[99091], 99.95th=[99091], 00:41:10.773 | 99.99th=[99091] 00:41:10.773 bw ( KiB/s): min=15912, max=16384, per=24.11%, avg=16148.00, stdev=333.75, samples=2 00:41:10.773 iops : min= 3978, max= 4096, avg=4037.00, stdev=83.44, samples=2 00:41:10.773 lat (usec) : 250=0.01%, 1000=0.10% 00:41:10.773 lat (msec) : 2=0.36%, 4=0.57%, 10=13.95%, 20=65.01%, 50=16.85% 00:41:10.773 lat (msec) : 100=3.14% 00:41:10.773 cpu : usr=2.87%, sys=6.32%, ctx=265, majf=0, minf=1 00:41:10.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:10.773 issued rwts: total=3653,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:10.773 job3: (groupid=0, jobs=1): err= 
0: pid=861533: Tue Nov 5 12:55:39 2024 00:41:10.773 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:41:10.773 slat (usec): min=2, max=22670, avg=106.90, stdev=766.54 00:41:10.773 clat (usec): min=6878, max=54969, avg=13809.43, stdev=5446.33 00:41:10.773 lat (usec): min=6883, max=54975, avg=13916.33, stdev=5488.96 00:41:10.773 clat percentiles (usec): 00:41:10.773 | 1.00th=[ 7635], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11731], 00:41:10.773 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:41:10.773 | 70.00th=[13173], 80.00th=[14484], 90.00th=[17171], 95.00th=[19268], 00:41:10.773 | 99.00th=[44827], 99.50th=[45351], 99.90th=[54789], 99.95th=[54789], 00:41:10.773 | 99.99th=[54789] 00:41:10.773 write: IOPS=4746, BW=18.5MiB/s (19.4MB/s)(18.7MiB/1006msec); 0 zone resets 00:41:10.773 slat (usec): min=3, max=10811, avg=93.58, stdev=458.61 00:41:10.773 clat (usec): min=1137, max=34644, avg=13358.56, stdev=3267.59 00:41:10.773 lat (usec): min=1146, max=34651, avg=13452.13, stdev=3304.17 00:41:10.773 clat percentiles (usec): 00:41:10.773 | 1.00th=[ 7242], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11863], 00:41:10.773 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:41:10.773 | 70.00th=[13173], 80.00th=[14091], 90.00th=[16909], 95.00th=[19006], 00:41:10.773 | 99.00th=[27919], 99.50th=[29230], 99.90th=[30802], 99.95th=[33817], 00:41:10.773 | 99.99th=[34866] 00:41:10.773 bw ( KiB/s): min=16656, max=20528, per=27.75%, avg=18592.00, stdev=2737.92, samples=2 00:41:10.773 iops : min= 4164, max= 5132, avg=4648.00, stdev=684.48, samples=2 00:41:10.773 lat (msec) : 2=0.06%, 10=6.27%, 20=89.10%, 50=4.47%, 100=0.11% 00:41:10.773 cpu : usr=7.06%, sys=11.64%, ctx=474, majf=0, minf=1 00:41:10.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:41:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:41:10.773 issued rwts: total=4608,4775,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.773 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:10.773 00:41:10.773 Run status group 0 (all jobs): 00:41:10.773 READ: bw=60.3MiB/s (63.2MB/s), 13.6MiB/s-17.9MiB/s (14.3MB/s-18.8MB/s), io=63.1MiB (66.1MB), run=1002-1046msec 00:41:10.773 WRITE: bw=65.4MiB/s (68.6MB/s), 15.3MiB/s-18.5MiB/s (16.0MB/s-19.4MB/s), io=68.4MiB (71.7MB), run=1002-1046msec 00:41:10.773 00:41:10.773 Disk stats (read/write): 00:41:10.773 nvme0n1: ios=3270/3584, merge=0/0, ticks=18004/25048, in_queue=43052, util=87.27% 00:41:10.773 nvme0n2: ios=3633/3614, merge=0/0, ticks=51287/53164, in_queue=104451, util=89.44% 00:41:10.773 nvme0n3: ios=3390/3584, merge=0/0, ticks=23743/26442, in_queue=50185, util=92.92% 00:41:10.773 nvme0n4: ios=3766/4096, merge=0/0, ticks=26208/28866, in_queue=55074, util=95.91% 00:41:10.773 12:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:10.773 12:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=861667 00:41:10.773 12:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:10.773 12:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:10.773 [global] 00:41:10.773 thread=1 00:41:10.773 invalidate=1 00:41:10.773 rw=read 00:41:10.773 time_based=1 00:41:10.773 runtime=10 00:41:10.773 ioengine=libaio 00:41:10.773 direct=1 00:41:10.773 bs=4096 00:41:10.773 iodepth=1 00:41:10.773 norandommap=1 00:41:10.773 numjobs=1 00:41:10.773 00:41:10.773 [job0] 00:41:10.773 filename=/dev/nvme0n1 00:41:10.773 [job1] 00:41:10.773 filename=/dev/nvme0n2 00:41:10.773 [job2] 00:41:10.773 filename=/dev/nvme0n3 00:41:10.773 [job3] 00:41:10.773 filename=/dev/nvme0n4 00:41:10.773 Could not set queue 
depth (nvme0n1) 00:41:10.773 Could not set queue depth (nvme0n2) 00:41:10.773 Could not set queue depth (nvme0n3) 00:41:10.773 Could not set queue depth (nvme0n4) 00:41:11.031 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.031 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.031 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.031 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:11.031 fio-3.35 00:41:11.031 Starting 4 threads 00:41:14.310 12:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:14.310 12:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:14.310 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=6012928, buflen=4096 00:41:14.310 fio: pid=861888, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:14.310 12:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:14.310 12:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:14.310 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=30310400, buflen=4096 00:41:14.310 fio: pid=861877, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:14.568 12:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:41:14.568 12:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:14.568 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=35627008, buflen=4096 00:41:14.568 fio: pid=861827, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:14.826 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:14.826 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:15.085 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=405504, buflen=4096 00:41:15.085 fio: pid=861840, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:15.085 00:41:15.086 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=861827: Tue Nov 5 12:55:44 2024 00:41:15.086 read: IOPS=2479, BW=9915KiB/s (10.2MB/s)(34.0MiB/3509msec) 00:41:15.086 slat (usec): min=4, max=13985, avg= 9.57, stdev=149.93 00:41:15.086 clat (usec): min=189, max=41314, avg=389.18, stdev=2504.82 00:41:15.086 lat (usec): min=194, max=41322, avg=398.74, stdev=2509.74 00:41:15.086 clat percentiles (usec): 00:41:15.086 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:41:15.086 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 229], 60.00th=[ 237], 00:41:15.086 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 314], 00:41:15.086 | 99.00th=[ 392], 99.50th=[ 457], 99.90th=[41157], 99.95th=[41157], 00:41:15.086 | 99.99th=[41157] 00:41:15.086 bw ( KiB/s): min= 104, max=18352, per=60.86%, avg=11293.33, stdev=7974.78, samples=6 00:41:15.086 iops : min= 26, max= 
4588, avg=2823.33, stdev=1993.70, samples=6 00:41:15.086 lat (usec) : 250=79.45%, 500=20.14%, 750=0.01% 00:41:15.086 lat (msec) : 2=0.01%, 50=0.38% 00:41:15.086 cpu : usr=0.66%, sys=2.57%, ctx=8701, majf=0, minf=2 00:41:15.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.086 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.086 issued rwts: total=8699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:15.086 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=861840: Tue Nov 5 12:55:44 2024 00:41:15.086 read: IOPS=26, BW=104KiB/s (106kB/s)(396KiB/3808msec) 00:41:15.086 slat (usec): min=9, max=8923, avg=106.78, stdev=890.63 00:41:15.086 clat (usec): min=250, max=41612, avg=38114.63, stdev=10469.48 00:41:15.086 lat (usec): min=267, max=50003, avg=38222.20, stdev=10531.88 00:41:15.086 clat percentiles (usec): 00:41:15.086 | 1.00th=[ 251], 5.00th=[ 334], 10.00th=[41157], 20.00th=[41157], 00:41:15.086 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:15.086 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:15.086 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:15.086 | 99.99th=[41681] 00:41:15.086 bw ( KiB/s): min= 96, max= 134, per=0.56%, avg=103.71, stdev=13.88, samples=7 00:41:15.086 iops : min= 24, max= 33, avg=25.86, stdev= 3.29, samples=7 00:41:15.086 lat (usec) : 500=6.00%, 1000=1.00% 00:41:15.086 lat (msec) : 50=92.00% 00:41:15.086 cpu : usr=0.08%, sys=0.00%, ctx=103, majf=0, minf=2 00:41:15.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.086 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:15.086 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:15.086 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=861877: Tue Nov 5 12:55:44 2024 00:41:15.086 read: IOPS=2312, BW=9250KiB/s (9472kB/s)(28.9MiB/3200msec) 00:41:15.086 slat (nsec): min=5653, max=63050, avg=10875.65, stdev=6337.02 00:41:15.086 clat (usec): min=211, max=41998, avg=415.76, stdev=2185.28 00:41:15.086 lat (usec): min=218, max=42013, avg=426.63, stdev=2185.66 00:41:15.086 clat percentiles (usec): 00:41:15.086 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 247], 00:41:15.086 | 30.00th=[ 255], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:41:15.086 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 355], 00:41:15.086 | 99.00th=[ 383], 99.50th=[ 437], 99.90th=[41157], 99.95th=[41157], 00:41:15.086 | 99.99th=[42206] 00:41:15.086 bw ( KiB/s): min= 104, max=15352, per=48.35%, avg=8972.00, stdev=6045.36, samples=6 00:41:15.086 iops : min= 26, max= 3838, avg=2243.00, stdev=1511.34, samples=6 00:41:15.086 lat (usec) : 250=24.24%, 500=75.41%, 750=0.03%, 1000=0.01% 00:41:15.086 lat (msec) : 50=0.30% 00:41:15.086 cpu : usr=1.50%, sys=3.81%, ctx=7403, majf=0, minf=1 00:41:15.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.086 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.086 issued rwts: total=7401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:15.086 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=861888: Tue Nov 5 12:55:44 2024 00:41:15.086 read: IOPS=504, BW=2018KiB/s (2066kB/s)(5872KiB/2910msec) 00:41:15.086 slat (nsec): min=5425, 
max=37839, avg=6804.12, stdev=2494.01 00:41:15.086 clat (usec): min=243, max=42036, avg=1957.79, stdev=8011.47 00:41:15.086 lat (usec): min=249, max=42051, avg=1964.59, stdev=8013.30 00:41:15.086 clat percentiles (usec): 00:41:15.086 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:41:15.086 | 30.00th=[ 314], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:41:15.086 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 343], 95.00th=[ 371], 00:41:15.086 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:41:15.086 | 99.99th=[42206] 00:41:15.086 bw ( KiB/s): min= 96, max= 8952, per=10.07%, avg=1868.80, stdev=3959.63, samples=5 00:41:15.086 iops : min= 24, max= 2238, avg=467.20, stdev=989.91, samples=5 00:41:15.086 lat (usec) : 250=0.14%, 500=95.64%, 750=0.14% 00:41:15.086 lat (msec) : 50=4.02% 00:41:15.086 cpu : usr=0.10%, sys=0.58%, ctx=1469, majf=0, minf=1 00:41:15.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:15.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.086 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:15.086 issued rwts: total=1469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:15.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:15.086 00:41:15.086 Run status group 0 (all jobs): 00:41:15.086 READ: bw=18.1MiB/s (19.0MB/s), 104KiB/s-9915KiB/s (106kB/s-10.2MB/s), io=69.0MiB (72.4MB), run=2910-3808msec 00:41:15.086 00:41:15.086 Disk stats (read/write): 00:41:15.086 nvme0n1: ios=8694/0, merge=0/0, ticks=3131/0, in_queue=3131, util=95.45% 00:41:15.086 nvme0n2: ios=135/0, merge=0/0, ticks=4446/0, in_queue=4446, util=99.65% 00:41:15.086 nvme0n3: ios=7139/0, merge=0/0, ticks=3782/0, in_queue=3782, util=99.75% 00:41:15.086 nvme0n4: ios=1467/0, merge=0/0, ticks=2824/0, in_queue=2824, util=96.74% 00:41:15.344 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:15.344 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:15.601 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:15.602 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:15.859 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:15.859 12:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:16.117 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:16.117 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 861667 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:16.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:16.374 12:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:16.374 nvmf hotplug test: fio failed as expected 00:41:16.374 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:16.941 12:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:16.941 rmmod nvme_tcp 00:41:16.941 rmmod nvme_fabrics 00:41:16.941 rmmod nvme_keyring 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 859780 ']' 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 859780 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 859780 ']' 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 859780 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@957 -- # uname 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:16.941 12:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 859780 00:41:16.941 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:16.941 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:16.941 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 859780' 00:41:16.941 killing process with pid 859780 00:41:16.941 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 859780 00:41:16.941 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 859780 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:17.200 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:17.201 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:17.201 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:17.201 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:17.201 12:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.101 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:19.101 00:41:19.101 real 0m23.759s 00:41:19.101 user 1m7.465s 00:41:19.101 sys 0m9.712s 00:41:19.101 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:19.102 ************************************ 00:41:19.102 END TEST nvmf_fio_target 00:41:19.102 ************************************ 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:19.102 ************************************ 00:41:19.102 START TEST nvmf_bdevio 00:41:19.102 ************************************ 00:41:19.102 12:55:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:19.102 * Looking for test storage... 00:41:19.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:41:19.102 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:19.361 12:55:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > 
ver2[v] )) 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:19.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.361 --rc genhtml_branch_coverage=1 00:41:19.361 --rc genhtml_function_coverage=1 00:41:19.361 --rc genhtml_legend=1 00:41:19.361 --rc geninfo_all_blocks=1 00:41:19.361 --rc geninfo_unexecuted_blocks=1 00:41:19.361 00:41:19.361 ' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:19.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.361 --rc genhtml_branch_coverage=1 00:41:19.361 --rc genhtml_function_coverage=1 00:41:19.361 --rc genhtml_legend=1 00:41:19.361 --rc geninfo_all_blocks=1 00:41:19.361 --rc geninfo_unexecuted_blocks=1 00:41:19.361 00:41:19.361 ' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:19.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.361 --rc genhtml_branch_coverage=1 00:41:19.361 --rc genhtml_function_coverage=1 00:41:19.361 --rc genhtml_legend=1 00:41:19.361 --rc geninfo_all_blocks=1 00:41:19.361 --rc geninfo_unexecuted_blocks=1 00:41:19.361 00:41:19.361 ' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:19.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.361 --rc genhtml_branch_coverage=1 
00:41:19.361 --rc genhtml_function_coverage=1 00:41:19.361 --rc genhtml_legend=1 00:41:19.361 --rc geninfo_all_blocks=1 00:41:19.361 --rc geninfo_unexecuted_blocks=1 00:41:19.361 00:41:19.361 ' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 
-- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.361 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:19.362 12:55:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:19.362 12:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:21.891 12:55:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:21.891 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:21.892 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:21.892 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:21.892 12:55:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:21.892 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:21.892 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:21.892 12:55:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:21.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:21.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:41:21.892 00:41:21.892 --- 10.0.0.2 ping statistics --- 00:41:21.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:21.892 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:21.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:21.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:41:21.892 00:41:21.892 --- 10.0.0.1 ping statistics --- 00:41:21.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:21.892 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=864499 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 864499 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 864499 ']' 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:21.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:21.892 12:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.892 [2024-11-05 12:55:50.778743] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:21.892 [2024-11-05 12:55:50.779806] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:41:21.892 [2024-11-05 12:55:50.779881] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:21.892 [2024-11-05 12:55:50.853211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:21.893 [2024-11-05 12:55:50.899198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:21.893 [2024-11-05 12:55:50.899259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:21.893 [2024-11-05 12:55:50.899288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:21.893 [2024-11-05 12:55:50.899299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:21.893 [2024-11-05 12:55:50.899308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:21.893 [2024-11-05 12:55:50.901020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:21.893 [2024-11-05 12:55:50.901084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:21.893 [2024-11-05 12:55:50.901132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:21.893 [2024-11-05 12:55:50.901135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:21.893 [2024-11-05 12:55:50.981671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:21.893 [2024-11-05 12:55:50.981903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:21.893 [2024-11-05 12:55:50.982210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:41:21.893 [2024-11-05 12:55:50.982700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:21.893 [2024-11-05 12:55:50.982962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.893 [2024-11-05 12:55:51.033840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.893 Malloc0 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:21.893 [2024-11-05 12:55:51.106064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:21.893 { 00:41:21.893 "params": { 00:41:21.893 "name": "Nvme$subsystem", 00:41:21.893 "trtype": "$TEST_TRANSPORT", 00:41:21.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:21.893 "adrfam": "ipv4", 00:41:21.893 "trsvcid": "$NVMF_PORT", 00:41:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:21.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:21.893 "hdgst": ${hdgst:-false}, 00:41:21.893 "ddgst": ${ddgst:-false} 00:41:21.893 }, 00:41:21.893 "method": "bdev_nvme_attach_controller" 00:41:21.893 } 00:41:21.893 EOF 00:41:21.893 )") 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:21.893 12:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:21.893 "params": { 00:41:21.893 "name": "Nvme1", 00:41:21.893 "trtype": "tcp", 00:41:21.893 "traddr": "10.0.0.2", 00:41:21.893 "adrfam": "ipv4", 00:41:21.893 "trsvcid": "4420", 00:41:21.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:21.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:21.893 "hdgst": false, 00:41:21.893 "ddgst": false 00:41:21.893 }, 00:41:21.893 "method": "bdev_nvme_attach_controller" 00:41:21.893 }' 00:41:22.151 [2024-11-05 12:55:51.157669] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:41:22.151 [2024-11-05 12:55:51.157735] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864525 ] 00:41:22.151 [2024-11-05 12:55:51.227208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:22.151 [2024-11-05 12:55:51.279344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:22.151 [2024-11-05 12:55:51.279394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:22.151 [2024-11-05 12:55:51.279398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.408 I/O targets: 00:41:22.408 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:22.408 00:41:22.408 00:41:22.408 CUnit - A unit testing framework for C - Version 2.1-3 00:41:22.408 http://cunit.sourceforge.net/ 00:41:22.408 00:41:22.408 00:41:22.408 Suite: bdevio tests on: Nvme1n1 00:41:22.408 Test: blockdev write read block ...passed 00:41:22.666 Test: blockdev write zeroes read block ...passed 00:41:22.666 Test: blockdev write zeroes read no split ...passed 00:41:22.666 Test: blockdev 
write zeroes read split ...passed 00:41:22.666 Test: blockdev write zeroes read split partial ...passed 00:41:22.666 Test: blockdev reset ...[2024-11-05 12:55:51.683175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:22.666 [2024-11-05 12:55:51.683293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b9ac0 (9): Bad file descriptor 00:41:22.666 [2024-11-05 12:55:51.736023] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:41:22.666 passed 00:41:22.666 Test: blockdev write read 8 blocks ...passed 00:41:22.666 Test: blockdev write read size > 128k ...passed 00:41:22.666 Test: blockdev write read invalid size ...passed 00:41:22.666 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:22.666 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:22.666 Test: blockdev write read max offset ...passed 00:41:22.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:22.924 Test: blockdev writev readv 8 blocks ...passed 00:41:22.924 Test: blockdev writev readv 30 x 1block ...passed 00:41:22.924 Test: blockdev writev readv block ...passed 00:41:22.924 Test: blockdev writev readv size > 128k ...passed 00:41:22.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:22.924 Test: blockdev comparev and writev ...[2024-11-05 12:55:51.991776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 [2024-11-05 12:55:51.991811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:51.991836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 
[2024-11-05 12:55:51.991855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:51.992304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 [2024-11-05 12:55:51.992328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:51.992350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 [2024-11-05 12:55:51.992367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:51.992785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 [2024-11-05 12:55:51.992808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:51.992829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 [2024-11-05 12:55:51.992845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:51.993277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 [2024-11-05 12:55:51.993301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:51.993322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:22.924 [2024-11-05 12:55:51.993338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:22.924 passed 00:41:22.924 Test: blockdev nvme passthru rw ...passed 00:41:22.924 Test: blockdev nvme passthru vendor specific ...[2024-11-05 12:55:52.075160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:22.924 [2024-11-05 12:55:52.075187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:52.075333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:22.924 [2024-11-05 12:55:52.075357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:52.075501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:22.924 [2024-11-05 12:55:52.075524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:22.924 [2024-11-05 12:55:52.075667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:22.924 [2024-11-05 12:55:52.075689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:22.924 passed 00:41:22.924 Test: blockdev nvme admin passthru ...passed 00:41:22.924 Test: blockdev copy ...passed 00:41:22.924 00:41:22.924 Run Summary: Type Total Ran Passed Failed Inactive 00:41:22.924 suites 1 1 n/a 0 0 00:41:22.924 tests 23 23 23 0 0 00:41:22.924 asserts 152 152 152 0 n/a 00:41:22.924 00:41:22.924 Elapsed time = 1.123 
seconds 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:23.182 rmmod nvme_tcp 00:41:23.182 rmmod nvme_fabrics 00:41:23.182 rmmod nvme_keyring 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 864499 ']' 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 864499 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 864499 ']' 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 864499 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 864499 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 864499' 00:41:23.182 killing process with pid 864499 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 864499 00:41:23.182 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 864499 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:23.440 12:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:25.977 12:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:25.977 00:41:25.977 real 0m6.354s 00:41:25.977 user 0m8.443s 00:41:25.977 sys 0m2.519s 00:41:25.977 12:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:25.977 12:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:25.977 ************************************ 00:41:25.977 END TEST nvmf_bdevio 00:41:25.977 ************************************ 00:41:25.977 12:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:25.977 00:41:25.977 real 3m54.650s 00:41:25.977 user 8m49.228s 00:41:25.977 sys 1m24.441s 00:41:25.977 12:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:41:25.977 12:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:25.977 ************************************ 00:41:25.977 END TEST nvmf_target_core_interrupt_mode 00:41:25.977 ************************************ 00:41:25.977 12:55:54 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:25.978 12:55:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:25.978 12:55:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:25.978 12:55:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:25.978 ************************************ 00:41:25.978 START TEST nvmf_interrupt 00:41:25.978 ************************************ 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:25.978 * Looking for test storage... 
00:41:25.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:25.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.978 --rc genhtml_branch_coverage=1 00:41:25.978 --rc genhtml_function_coverage=1 00:41:25.978 --rc genhtml_legend=1 00:41:25.978 --rc geninfo_all_blocks=1 00:41:25.978 --rc geninfo_unexecuted_blocks=1 00:41:25.978 00:41:25.978 ' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:25.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.978 --rc genhtml_branch_coverage=1 00:41:25.978 --rc 
genhtml_function_coverage=1 00:41:25.978 --rc genhtml_legend=1 00:41:25.978 --rc geninfo_all_blocks=1 00:41:25.978 --rc geninfo_unexecuted_blocks=1 00:41:25.978 00:41:25.978 ' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:25.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.978 --rc genhtml_branch_coverage=1 00:41:25.978 --rc genhtml_function_coverage=1 00:41:25.978 --rc genhtml_legend=1 00:41:25.978 --rc geninfo_all_blocks=1 00:41:25.978 --rc geninfo_unexecuted_blocks=1 00:41:25.978 00:41:25.978 ' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:25.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:25.978 --rc genhtml_branch_coverage=1 00:41:25.978 --rc genhtml_function_coverage=1 00:41:25.978 --rc genhtml_legend=1 00:41:25.978 --rc geninfo_all_blocks=1 00:41:25.978 --rc geninfo_unexecuted_blocks=1 00:41:25.978 00:41:25.978 ' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:25.978 
12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.978 
12:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:25.978 12:55:54 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:25.978 12:55:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:25.979 
12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:25.979 12:55:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:27.880 12:55:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:27.880 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:27.880 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:27.880 12:55:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:27.880 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.880 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:27.880 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:27.881 12:55:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:27.881 12:55:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:27.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:27.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:41:27.881 00:41:27.881 --- 10.0.0.2 ping statistics --- 00:41:27.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.881 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:27.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:27.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:41:27.881 00:41:27.881 --- 10.0.0.1 ping statistics --- 00:41:27.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.881 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:27.881 12:55:57 
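The harness above builds an isolated target/initiator topology out of the two `cvl_0_*` ports: the target interface is moved into a fresh network namespace, both sides get `10.0.0.x/24` addresses, an iptables ACCEPT rule opens TCP port 4420, and connectivity is verified with `ping` in each direction. A minimal dry-run sketch of that sequence (interface names and addresses taken from this log; `run` echoes instead of executing, since the real commands require root and the actual NICs):

```shell
# Dry-run sketch of the nvmf_tcp_init namespace setup seen in this log.
# Set DRY_RUN= (empty) to actually execute -- requires root and the NICs.
DRY_RUN=echo
run() { $DRY_RUN "$@"; }

TGT_IF=cvl_0_0          # target-side port (moved into the namespace)
INI_IF=cvl_0_1          # initiator-side port (stays in the root namespace)
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listen port on the initiator-facing interface.
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target sanity check
```

Running the target inside the namespace (the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt` line later in the log) is what lets one machine act as both NVMe-oF target and initiator over real hardware.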
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=866614 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 866614 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 866614 ']' 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:27.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:27.881 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:28.139 [2024-11-05 12:55:57.155261] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:28.139 [2024-11-05 12:55:57.156405] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:41:28.139 [2024-11-05 12:55:57.156477] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:28.139 [2024-11-05 12:55:57.228996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:28.139 [2024-11-05 12:55:57.274056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:28.139 [2024-11-05 12:55:57.274115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:28.139 [2024-11-05 12:55:57.274129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:28.139 [2024-11-05 12:55:57.274140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:28.139 [2024-11-05 12:55:57.274149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:28.139 [2024-11-05 12:55:57.275440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:28.139 [2024-11-05 12:55:57.275447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:28.139 [2024-11-05 12:55:57.360924] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:28.139 [2024-11-05 12:55:57.360972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:28.139 [2024-11-05 12:55:57.361217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:28.398 5000+0 records in 00:41:28.398 5000+0 records out 00:41:28.398 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0119824 s, 855 MB/s 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:28.398 AIO0 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.398 12:55:57 
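`setup_bdev_aio` backs the NVMe-oF namespace with a plain file rather than real NVMe media: `dd` writes a 10 MB zero-filled file (2048-byte blocks × 5000), which `bdev_aio_create` then exposes as bdev `AIO0` with a 2048-byte block size. The file-creation step in isolation (the temp path here is hypothetical; the log writes it under the test workspace, and the RPC call needs a running SPDK target):

```shell
# Create the 10 MB backing file used for the AIO bdev (2048 B x 5000 blocks).
# AIOFILE is a hypothetical temp path; the log uses test/nvmf/target/aiofile.
AIOFILE=$(mktemp /tmp/aiofile.XXXXXX)
dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000 2>/dev/null

SIZE=$(wc -c < "$AIOFILE")
echo "backing file: $SIZE bytes"   # 2048 * 5000 = 10240000
# With a live target the file would then be registered via:
#   rpc.py bdev_aio_create "$AIOFILE" AIO0 2048
```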
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:28.398 [2024-11-05 12:55:57.460096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:28.398 [2024-11-05 12:55:57.484335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 866614 0 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 866614 0 idle 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:28.398 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866614 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.25 reactor_0' 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866614 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.25 reactor_0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 866614 1 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 866614 1 idle 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866620 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.00 reactor_1' 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866620 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.00 
reactor_1 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=866774 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 866614 0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 866614 0 busy 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:28.656 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866614 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:00.25 reactor_0' 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866614 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:00.25 reactor_0 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:28.915 12:55:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:41:29.851 12:55:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:41:29.851 12:55:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:29.851 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
866614 -w 256 00:41:29.851 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866614 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:02.54 reactor_0' 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866614 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:02.54 reactor_0 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 866614 1 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 866614 1 busy 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:30.154 12:55:59 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866620 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:01.31 reactor_1' 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866620 root 20 0 128.2g 48000 34176 R 99.9 0.1 0:01.31 reactor_1 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:30.154 12:55:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 866774 00:41:40.152 Initializing NVMe Controllers 00:41:40.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:40.152 Controller IO queue size 256, less than 
required. 00:41:40.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:40.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:40.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:40.152 Initialization complete. Launching workers. 00:41:40.152 ======================================================== 00:41:40.152 Latency(us) 00:41:40.152 Device Information : IOPS MiB/s Average min max 00:41:40.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13803.19 53.92 18559.13 4494.74 22785.85 00:41:40.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13717.39 53.58 18675.64 4351.75 23374.36 00:41:40.152 ======================================================== 00:41:40.152 Total : 27520.59 107.50 18617.21 4351.75 23374.36 00:41:40.152 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 866614 0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 866614 0 idle 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866614 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:20.20 reactor_0' 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866614 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:20.20 reactor_0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 866614 1 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 866614 1 idle 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:40.152 12:56:08 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866620 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:09.97 reactor_1' 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866620 root 20 0 128.2g 48000 34176 S 0.0 0.1 0:09.97 reactor_1 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 
00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:41:40.152 12:56:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 866614 0 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 866614 0 idle 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:41.531 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:41.789 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866614 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:20.28 reactor_0' 00:41:41.789 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866614 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:20.28 reactor_0 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:41.790 12:56:10 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 866614 1 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 866614 1 idle 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=866614 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 866614 -w 256 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 866620 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:10.01 reactor_1' 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 866620 root 20 0 128.2g 60288 34176 S 0.0 0.1 0:10.01 reactor_1 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # 
cpu_rate=0.0 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:41.790 12:56:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:42.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:42.048 
12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:42.048 rmmod nvme_tcp 00:41:42.048 rmmod nvme_fabrics 00:41:42.048 rmmod nvme_keyring 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 866614 ']' 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 866614 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 866614 ']' 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 866614 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 866614 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 866614' 00:41:42.048 killing process with pid 866614 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 866614 00:41:42.048 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 866614 00:41:42.306 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:42.306 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:42.306 
12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:42.307 12:56:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:44.213 12:56:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:44.213 00:41:44.213 real 0m18.701s 00:41:44.213 user 0m37.598s 00:41:44.213 sys 0m6.216s 00:41:44.213 12:56:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:44.213 12:56:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:44.213 ************************************ 00:41:44.213 END TEST nvmf_interrupt 00:41:44.213 ************************************ 00:41:44.213 00:41:44.213 real 32m50.349s 00:41:44.213 user 86m47.264s 00:41:44.213 sys 8m7.035s 00:41:44.213 12:56:13 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:44.213 12:56:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.213 ************************************ 00:41:44.213 END TEST nvmf_tcp 00:41:44.213 ************************************ 00:41:44.471 12:56:13 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:41:44.471 12:56:13 -- spdk/autotest.sh@282 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:44.471 12:56:13 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:41:44.471 12:56:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:44.471 12:56:13 -- common/autotest_common.sh@10 -- # set +x 00:41:44.471 ************************************ 00:41:44.471 START TEST spdkcli_nvmf_tcp 00:41:44.471 ************************************ 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:44.471 * Looking for test storage... 00:41:44.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:44.471 12:56:13 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:44.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.471 --rc genhtml_branch_coverage=1 00:41:44.471 --rc genhtml_function_coverage=1 00:41:44.471 --rc genhtml_legend=1 00:41:44.471 --rc geninfo_all_blocks=1 00:41:44.471 --rc 
geninfo_unexecuted_blocks=1 00:41:44.471 00:41:44.471 ' 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:44.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.471 --rc genhtml_branch_coverage=1 00:41:44.471 --rc genhtml_function_coverage=1 00:41:44.471 --rc genhtml_legend=1 00:41:44.471 --rc geninfo_all_blocks=1 00:41:44.471 --rc geninfo_unexecuted_blocks=1 00:41:44.471 00:41:44.471 ' 00:41:44.471 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:44.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.471 --rc genhtml_branch_coverage=1 00:41:44.471 --rc genhtml_function_coverage=1 00:41:44.471 --rc genhtml_legend=1 00:41:44.471 --rc geninfo_all_blocks=1 00:41:44.471 --rc geninfo_unexecuted_blocks=1 00:41:44.471 00:41:44.472 ' 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:44.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.472 --rc genhtml_branch_coverage=1 00:41:44.472 --rc genhtml_function_coverage=1 00:41:44.472 --rc genhtml_legend=1 00:41:44.472 --rc geninfo_all_blocks=1 00:41:44.472 --rc geninfo_unexecuted_blocks=1 00:41:44.472 00:41:44.472 ' 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:44.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=868787 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 868787 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 
868787 ']' 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:44.472 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.472 [2024-11-05 12:56:13.685826] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:41:44.472 [2024-11-05 12:56:13.685945] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868787 ] 00:41:44.730 [2024-11-05 12:56:13.751547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:44.730 [2024-11-05 12:56:13.798502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.730 [2024-11-05 12:56:13.798506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.730 12:56:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:44.730 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:44.730 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:44.730 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:44.730 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:44.730 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:44.730 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:44.730 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:44.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:44.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:44.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:44.730 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:44.730 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:44.730 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:44.731 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:44.731 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:44.731 ' 00:41:48.024 [2024-11-05 12:56:16.548478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:48.594 [2024-11-05 12:56:17.820884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:41:51.125 [2024-11-05 12:56:20.163974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:53.027 [2024-11-05 12:56:22.186225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:54.931 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:54.931 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:54.931 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:54.931 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:54.931 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:54.931 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:54.931 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:54.931 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:54.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:54.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:54.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:54.931 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:54.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:54.932 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:54.932 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:54.932 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:54.932 12:56:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:55.191 12:56:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:55.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:41:55.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:55.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:55.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:55.191 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:55.191 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:55.191 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:55.191 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:55.191 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:55.191 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:55.191 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:55.191 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:55.191 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:55.191 ' 00:42:00.465 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:00.465 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:00.465 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:00.465 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:00.465 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:00.466 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:00.466 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:00.466 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:00.466 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:00.466 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:00.466 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:00.466 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:00.466 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:00.466 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 868787 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 868787 ']' 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 868787 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 868787 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 868787' 00:42:00.725 killing process with pid 868787 00:42:00.725 12:56:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 868787 00:42:00.725 12:56:29 
spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 868787 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 868787 ']' 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 868787 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 868787 ']' 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 868787 00:42:00.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (868787) - No such process 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 868787 is not found' 00:42:00.984 Process with pid 868787 is not found 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:00.984 00:42:00.984 real 0m16.560s 00:42:00.984 user 0m35.351s 00:42:00.984 sys 0m0.749s 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:00.984 12:56:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:00.984 ************************************ 00:42:00.984 END TEST spdkcli_nvmf_tcp 00:42:00.984 ************************************ 00:42:00.984 12:56:30 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:00.984 12:56:30 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 
']' 00:42:00.984 12:56:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:00.984 12:56:30 -- common/autotest_common.sh@10 -- # set +x 00:42:00.984 ************************************ 00:42:00.984 START TEST nvmf_identify_passthru 00:42:00.984 ************************************ 00:42:00.984 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:00.984 * Looking for test storage... 00:42:00.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:00.984 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:00.984 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:42:00.984 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:01.242 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:01.242 12:56:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.243 --rc genhtml_branch_coverage=1 00:42:01.243 --rc genhtml_function_coverage=1 00:42:01.243 --rc genhtml_legend=1 00:42:01.243 
--rc geninfo_all_blocks=1 00:42:01.243 --rc geninfo_unexecuted_blocks=1 00:42:01.243 00:42:01.243 ' 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.243 --rc genhtml_branch_coverage=1 00:42:01.243 --rc genhtml_function_coverage=1 00:42:01.243 --rc genhtml_legend=1 00:42:01.243 --rc geninfo_all_blocks=1 00:42:01.243 --rc geninfo_unexecuted_blocks=1 00:42:01.243 00:42:01.243 ' 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.243 --rc genhtml_branch_coverage=1 00:42:01.243 --rc genhtml_function_coverage=1 00:42:01.243 --rc genhtml_legend=1 00:42:01.243 --rc geninfo_all_blocks=1 00:42:01.243 --rc geninfo_unexecuted_blocks=1 00:42:01.243 00:42:01.243 ' 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:01.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.243 --rc genhtml_branch_coverage=1 00:42:01.243 --rc genhtml_function_coverage=1 00:42:01.243 --rc genhtml_legend=1 00:42:01.243 --rc geninfo_all_blocks=1 00:42:01.243 --rc geninfo_unexecuted_blocks=1 00:42:01.243 00:42:01.243 ' 00:42:01.243 12:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:01.243 12:56:30 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:01.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:01.243 12:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:01.243 12:56:30 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:01.243 12:56:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.243 12:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:01.243 12:56:30 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:01.243 12:56:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:03.159 
12:56:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:03.159 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:03.159 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:03.160 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:03.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:03.160 12:56:32 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:03.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:03.160 
12:56:32 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:03.160 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:03.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:03.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:42:03.418 00:42:03.418 --- 10.0.0.2 ping statistics --- 00:42:03.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.418 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:03.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:03.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:42:03.418 00:42:03.418 --- 10.0.0.1 ping statistics --- 00:42:03.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:03.418 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:03.418 12:56:32 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:03.418 12:56:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:03.418 12:56:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:03.418 
12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:03.418 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:03.419 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:42:03.419 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:42:03.419 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:42:03.419 12:56:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:42:03.419 12:56:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:42:03.419 12:56:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:42:03.419 12:56:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:03.419 12:56:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:03.419 12:56:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:07.611 12:56:36 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:42:07.611 12:56:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:42:07.611 12:56:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:07.611 12:56:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:11.803 12:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:11.803 12:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:11.803 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:11.803 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:11.803 12:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:11.803 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:11.803 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:11.803 12:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=873290 00:42:11.803 12:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:11.803 12:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:11.803 12:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 873290 00:42:11.803 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 873290 ']' 00:42:11.803 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:42:11.804 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:11.804 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:11.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:11.804 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:11.804 12:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:11.804 [2024-11-05 12:56:41.042287] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:42:11.804 [2024-11-05 12:56:41.042389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:12.062 [2024-11-05 12:56:41.123933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:12.062 [2024-11-05 12:56:41.172394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:12.062 [2024-11-05 12:56:41.172466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:12.062 [2024-11-05 12:56:41.172494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:12.062 [2024-11-05 12:56:41.172505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:12.062 [2024-11-05 12:56:41.172514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:12.062 [2024-11-05 12:56:41.174102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:12.062 [2024-11-05 12:56:41.174184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:12.062 [2024-11-05 12:56:41.174162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:12.062 [2024-11-05 12:56:41.174190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.062 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:12.062 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:42:12.062 12:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:12.062 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.062 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:12.062 INFO: Log level set to 20 00:42:12.062 INFO: Requests: 00:42:12.062 { 00:42:12.062 "jsonrpc": "2.0", 00:42:12.062 "method": "nvmf_set_config", 00:42:12.062 "id": 1, 00:42:12.062 "params": { 00:42:12.062 "admin_cmd_passthru": { 00:42:12.062 "identify_ctrlr": true 00:42:12.062 } 00:42:12.062 } 00:42:12.062 } 00:42:12.062 00:42:12.062 INFO: response: 00:42:12.062 { 00:42:12.062 "jsonrpc": "2.0", 00:42:12.062 "id": 1, 00:42:12.062 "result": true 00:42:12.062 } 00:42:12.062 00:42:12.062 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.062 12:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:12.062 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.062 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:12.062 INFO: Setting log level to 20 00:42:12.062 INFO: Setting log level to 20 00:42:12.062 INFO: Log level set to 20 00:42:12.062 INFO: Log level set to 20 00:42:12.062 
INFO: Requests: 00:42:12.062 { 00:42:12.062 "jsonrpc": "2.0", 00:42:12.062 "method": "framework_start_init", 00:42:12.062 "id": 1 00:42:12.062 } 00:42:12.062 00:42:12.062 INFO: Requests: 00:42:12.062 { 00:42:12.062 "jsonrpc": "2.0", 00:42:12.062 "method": "framework_start_init", 00:42:12.062 "id": 1 00:42:12.062 } 00:42:12.062 00:42:12.319 [2024-11-05 12:56:41.384531] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:12.319 INFO: response: 00:42:12.319 { 00:42:12.319 "jsonrpc": "2.0", 00:42:12.319 "id": 1, 00:42:12.319 "result": true 00:42:12.319 } 00:42:12.319 00:42:12.319 INFO: response: 00:42:12.319 { 00:42:12.319 "jsonrpc": "2.0", 00:42:12.319 "id": 1, 00:42:12.319 "result": true 00:42:12.319 } 00:42:12.319 00:42:12.319 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.319 12:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:12.319 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.319 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:12.319 INFO: Setting log level to 40 00:42:12.319 INFO: Setting log level to 40 00:42:12.319 INFO: Setting log level to 40 00:42:12.319 [2024-11-05 12:56:41.394521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:12.319 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:12.319 12:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:12.319 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:12.319 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:12.319 12:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:42:12.319 12:56:41 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:12.319 12:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:15.604 Nvme0n1 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:15.604 [2024-11-05 12:56:44.291441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:15.604 12:56:44 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:15.604 [ 00:42:15.604 { 00:42:15.604 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:15.604 "subtype": "Discovery", 00:42:15.604 "listen_addresses": [], 00:42:15.604 "allow_any_host": true, 00:42:15.604 "hosts": [] 00:42:15.604 }, 00:42:15.604 { 00:42:15.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:15.604 "subtype": "NVMe", 00:42:15.604 "listen_addresses": [ 00:42:15.604 { 00:42:15.604 "trtype": "TCP", 00:42:15.604 "adrfam": "IPv4", 00:42:15.604 "traddr": "10.0.0.2", 00:42:15.604 "trsvcid": "4420" 00:42:15.604 } 00:42:15.604 ], 00:42:15.604 "allow_any_host": true, 00:42:15.604 "hosts": [], 00:42:15.604 "serial_number": "SPDK00000000000001", 00:42:15.604 "model_number": "SPDK bdev Controller", 00:42:15.604 "max_namespaces": 1, 00:42:15.604 "min_cntlid": 1, 00:42:15.604 "max_cntlid": 65519, 00:42:15.604 "namespaces": [ 00:42:15.604 { 00:42:15.604 "nsid": 1, 00:42:15.604 "bdev_name": "Nvme0n1", 00:42:15.604 "name": "Nvme0n1", 00:42:15.604 "nguid": "7D1DA4B59E9140069ECF60C5C4E371F5", 00:42:15.604 "uuid": "7d1da4b5-9e91-4006-9ecf-60c5c4e371f5" 00:42:15.604 } 00:42:15.604 ] 00:42:15.604 } 00:42:15.604 ] 00:42:15.604 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:15.604 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:15.862 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:15.862 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:42:15.862 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:15.862 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:15.862 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:15.862 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:15.862 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:15.862 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:15.862 12:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:15.862 rmmod nvme_tcp 00:42:15.862 rmmod nvme_fabrics 00:42:15.862 rmmod nvme_keyring 00:42:15.862 12:56:44 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 873290 ']' 00:42:15.862 12:56:44 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 873290 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 873290 ']' 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 873290 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 873290 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 873290' 00:42:15.863 killing process with pid 873290 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 873290 00:42:15.863 12:56:44 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 873290 00:42:17.238 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:17.238 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:17.238 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:17.238 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:17.497 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:17.497 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@791 
-- # iptables-save 00:42:17.497 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:17.497 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:17.497 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:17.497 12:56:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:17.497 12:56:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:17.497 12:56:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.403 12:56:48 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:19.403 00:42:19.403 real 0m18.434s 00:42:19.403 user 0m27.937s 00:42:19.403 sys 0m2.403s 00:42:19.403 12:56:48 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:19.403 12:56:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.403 ************************************ 00:42:19.403 END TEST nvmf_identify_passthru 00:42:19.403 ************************************ 00:42:19.403 12:56:48 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:19.403 12:56:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:19.403 12:56:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:19.403 12:56:48 -- common/autotest_common.sh@10 -- # set +x 00:42:19.403 ************************************ 00:42:19.403 START TEST nvmf_dif 00:42:19.403 ************************************ 00:42:19.403 12:56:48 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:19.403 * Looking for test storage... 
00:42:19.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:19.403 12:56:48 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:19.403 12:56:48 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:42:19.403 12:56:48 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:19.663 12:56:48 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:19.663 12:56:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:19.663 12:56:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:19.663 12:56:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:19.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.664 --rc genhtml_branch_coverage=1 00:42:19.664 --rc genhtml_function_coverage=1 00:42:19.664 --rc genhtml_legend=1 00:42:19.664 --rc geninfo_all_blocks=1 00:42:19.664 --rc geninfo_unexecuted_blocks=1 00:42:19.664 00:42:19.664 ' 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:19.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.664 --rc genhtml_branch_coverage=1 00:42:19.664 --rc genhtml_function_coverage=1 00:42:19.664 --rc genhtml_legend=1 00:42:19.664 --rc geninfo_all_blocks=1 00:42:19.664 --rc geninfo_unexecuted_blocks=1 00:42:19.664 00:42:19.664 ' 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:42:19.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.664 --rc genhtml_branch_coverage=1 00:42:19.664 --rc genhtml_function_coverage=1 00:42:19.664 --rc genhtml_legend=1 00:42:19.664 --rc geninfo_all_blocks=1 00:42:19.664 --rc geninfo_unexecuted_blocks=1 00:42:19.664 00:42:19.664 ' 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:19.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:19.664 --rc genhtml_branch_coverage=1 00:42:19.664 --rc genhtml_function_coverage=1 00:42:19.664 --rc genhtml_legend=1 00:42:19.664 --rc geninfo_all_blocks=1 00:42:19.664 --rc geninfo_unexecuted_blocks=1 00:42:19.664 00:42:19.664 ' 00:42:19.664 12:56:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:19.664 12:56:48 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:19.664 12:56:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:19.664 12:56:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.664 12:56:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.664 12:56:48 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.664 12:56:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:19.664 12:56:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:19.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:19.664 12:56:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:19.664 12:56:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:42:19.664 12:56:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:19.664 12:56:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:19.664 12:56:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:19.664 12:56:48 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:42:19.664 12:56:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:21.609 12:56:50 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:21.609 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:21.609 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:21.609 12:56:50 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:21.609 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:21.609 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:21.609 12:56:50 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:21.610 
12:56:50 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:21.610 12:56:50 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:21.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:21.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:42:21.867 00:42:21.867 --- 10.0.0.2 ping statistics --- 00:42:21.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:21.867 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:21.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:21.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:42:21.867 00:42:21.867 --- 10.0.0.1 ping statistics --- 00:42:21.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:21.867 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:21.867 12:56:50 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:22.802 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:42:22.802 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:22.802 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:42:22.802 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:42:22.802 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:42:22.802 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:42:22.802 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:42:22.802 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:42:22.802 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:42:22.802 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:42:22.802 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:42:22.802 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:42:22.802 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:42:22.802 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:42:22.802 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:42:22.802 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:42:22.802 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:23.061 12:56:52 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:23.061 12:56:52 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=876567 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:23.061 12:56:52 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 876567 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 876567 ']' 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:23.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:23.061 12:56:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:23.061 [2024-11-05 12:56:52.289875] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:42:23.061 [2024-11-05 12:56:52.289959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:23.319 [2024-11-05 12:56:52.368202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:23.319 [2024-11-05 12:56:52.415630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:23.319 [2024-11-05 12:56:52.415720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:23.319 [2024-11-05 12:56:52.415734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:23.319 [2024-11-05 12:56:52.415745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:23.319 [2024-11-05 12:56:52.415754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:23.319 [2024-11-05 12:56:52.416402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.319 12:56:52 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:23.319 12:56:52 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:42:23.319 12:56:52 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:23.319 12:56:52 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:23.319 12:56:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:23.319 12:56:52 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:23.319 12:56:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:23.319 12:56:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:23.319 12:56:52 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.319 12:56:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:23.319 [2024-11-05 12:56:52.557502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:23.578 12:56:52 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.578 12:56:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:23.578 12:56:52 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:23.578 12:56:52 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:23.578 12:56:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:23.578 ************************************ 00:42:23.578 START TEST fio_dif_1_default 00:42:23.578 ************************************ 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:23.578 bdev_null0 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:23.578 [2024-11-05 12:56:52.613797] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:23.578 { 00:42:23.578 "params": { 00:42:23.578 "name": "Nvme$subsystem", 00:42:23.578 "trtype": "$TEST_TRANSPORT", 00:42:23.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:23.578 "adrfam": "ipv4", 00:42:23.578 "trsvcid": "$NVMF_PORT", 00:42:23.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:23.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:23.578 "hdgst": ${hdgst:-false}, 00:42:23.578 "ddgst": ${ddgst:-false} 00:42:23.578 }, 00:42:23.578 "method": "bdev_nvme_attach_controller" 00:42:23.578 } 00:42:23.578 EOF 00:42:23.578 )") 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:23.578 "params": { 00:42:23.578 "name": "Nvme0", 00:42:23.578 "trtype": "tcp", 00:42:23.578 "traddr": "10.0.0.2", 00:42:23.578 "adrfam": "ipv4", 00:42:23.578 "trsvcid": "4420", 00:42:23.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:23.578 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:23.578 "hdgst": false, 00:42:23.578 "ddgst": false 00:42:23.578 }, 00:42:23.578 "method": "bdev_nvme_attach_controller" 00:42:23.578 }' 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:23.578 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:23.579 12:56:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:23.837 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:23.837 fio-3.35 
00:42:23.837 Starting 1 thread 00:42:36.030 00:42:36.030 filename0: (groupid=0, jobs=1): err= 0: pid=876796: Tue Nov 5 12:57:03 2024 00:42:36.030 read: IOPS=98, BW=395KiB/s (404kB/s)(3952KiB/10011msec) 00:42:36.030 slat (nsec): min=6658, max=72860, avg=8721.00, stdev=4044.35 00:42:36.030 clat (usec): min=554, max=43550, avg=40500.90, stdev=4428.61 00:42:36.030 lat (usec): min=561, max=43587, avg=40509.62, stdev=4428.01 00:42:36.030 clat percentiles (usec): 00:42:36.030 | 1.00th=[ 660], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:36.030 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:36.030 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:36.030 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:42:36.030 | 99.99th=[43779] 00:42:36.030 bw ( KiB/s): min= 384, max= 448, per=99.55%, avg=393.60, stdev=18.28, samples=20 00:42:36.030 iops : min= 96, max= 112, avg=98.40, stdev= 4.57, samples=20 00:42:36.030 lat (usec) : 750=1.21% 00:42:36.030 lat (msec) : 50=98.79% 00:42:36.030 cpu : usr=90.69%, sys=9.02%, ctx=18, majf=0, minf=252 00:42:36.030 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:36.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:36.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:36.030 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:36.030 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:36.030 00:42:36.030 Run status group 0 (all jobs): 00:42:36.030 READ: bw=395KiB/s (404kB/s), 395KiB/s-395KiB/s (404kB/s-404kB/s), io=3952KiB (4047kB), run=10011-10011msec 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:36.030 12:57:03 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:36.030 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 00:42:36.031 real 0m11.147s 00:42:36.031 user 0m10.331s 00:42:36.031 sys 0m1.157s 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 ************************************ 00:42:36.031 END TEST fio_dif_1_default 00:42:36.031 ************************************ 00:42:36.031 12:57:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:36.031 12:57:03 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:36.031 12:57:03 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 ************************************ 00:42:36.031 START TEST fio_dif_1_multi_subsystems 00:42:36.031 ************************************ 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 bdev_null0 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 [2024-11-05 12:57:03.809098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 bdev_null1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 12:57:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:36.031 { 00:42:36.031 "params": { 00:42:36.031 "name": "Nvme$subsystem", 00:42:36.031 "trtype": "$TEST_TRANSPORT", 00:42:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:36.031 "adrfam": "ipv4", 
00:42:36.031 "trsvcid": "$NVMF_PORT", 00:42:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:36.031 "hdgst": ${hdgst:-false}, 00:42:36.031 "ddgst": ${ddgst:-false} 00:42:36.031 }, 00:42:36.031 "method": "bdev_nvme_attach_controller" 00:42:36.031 } 00:42:36.031 EOF 00:42:36.031 )") 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:36.031 { 00:42:36.031 "params": { 00:42:36.031 "name": "Nvme$subsystem", 00:42:36.031 "trtype": "$TEST_TRANSPORT", 00:42:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:36.031 "adrfam": "ipv4", 00:42:36.031 "trsvcid": "$NVMF_PORT", 00:42:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:36.031 "hdgst": ${hdgst:-false}, 00:42:36.031 "ddgst": ${ddgst:-false} 00:42:36.031 }, 00:42:36.031 "method": "bdev_nvme_attach_controller" 00:42:36.031 } 00:42:36.031 EOF 00:42:36.031 )") 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:36.031 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:36.031 "params": { 00:42:36.031 "name": "Nvme0", 00:42:36.031 "trtype": "tcp", 00:42:36.031 "traddr": "10.0.0.2", 00:42:36.031 "adrfam": "ipv4", 00:42:36.031 "trsvcid": "4420", 00:42:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:36.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:36.031 "hdgst": false, 00:42:36.031 "ddgst": false 00:42:36.031 }, 00:42:36.031 "method": "bdev_nvme_attach_controller" 00:42:36.031 },{ 00:42:36.031 "params": { 00:42:36.031 "name": "Nvme1", 00:42:36.031 "trtype": "tcp", 00:42:36.031 "traddr": "10.0.0.2", 00:42:36.031 "adrfam": "ipv4", 00:42:36.031 "trsvcid": "4420", 00:42:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:36.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:36.031 "hdgst": false, 00:42:36.031 "ddgst": false 00:42:36.031 }, 00:42:36.031 "method": "bdev_nvme_attach_controller" 00:42:36.031 }' 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:36.032 12:57:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:36.032 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:36.032 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:36.032 fio-3.35 00:42:36.032 Starting 2 threads 00:42:45.998 00:42:45.998 filename0: (groupid=0, jobs=1): err= 0: pid=878307: Tue Nov 5 12:57:14 2024 00:42:45.998 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:42:45.998 slat (nsec): min=6393, max=28483, avg=9600.69, stdev=2651.42 00:42:45.998 clat (usec): min=40597, max=47000, avg=41002.88, stdev=397.40 00:42:45.998 lat (usec): min=40605, max=47016, avg=41012.48, stdev=397.48 00:42:45.998 clat percentiles (usec): 00:42:45.998 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:45.998 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:45.998 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:45.998 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:42:45.998 | 99.99th=[46924] 00:42:45.998 bw ( KiB/s): min= 384, max= 416, per=40.32%, avg=388.80, stdev=11.72, samples=20 00:42:45.998 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:45.998 lat (msec) : 50=100.00% 00:42:45.998 cpu : usr=94.46%, sys=5.21%, ctx=15, majf=0, minf=122 00:42:45.998 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:45.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.998 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.998 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.998 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:45.998 filename1: (groupid=0, jobs=1): err= 0: pid=878308: Tue Nov 5 12:57:14 2024 00:42:45.998 read: IOPS=143, BW=573KiB/s (587kB/s)(5744KiB/10025msec) 00:42:45.998 slat (nsec): min=7192, max=26237, avg=9640.75, stdev=2437.08 00:42:45.998 clat (usec): min=522, max=46972, avg=27894.92, stdev=18984.56 00:42:45.998 lat (usec): min=530, max=46988, avg=27904.56, stdev=18984.60 00:42:45.998 clat percentiles (usec): 00:42:45.998 | 1.00th=[ 537], 5.00th=[ 553], 10.00th=[ 570], 20.00th=[ 619], 00:42:45.998 | 30.00th=[ 676], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:45.999 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:45.999 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:42:45.999 | 99.99th=[46924] 00:42:45.999 bw ( KiB/s): min= 384, max= 1440, per=59.44%, avg=572.80, stdev=303.21, samples=20 00:42:45.999 iops : min= 96, max= 360, avg=143.20, stdev=75.80, samples=20 00:42:45.999 lat (usec) : 750=31.75%, 1000=0.84% 00:42:45.999 lat (msec) : 50=67.41% 00:42:45.999 cpu : usr=94.68%, sys=5.01%, ctx=23, majf=0, minf=188 00:42:45.999 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:45.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.999 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.999 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:45.999 00:42:45.999 Run status group 0 (all jobs): 00:42:45.999 READ: bw=962KiB/s (985kB/s), 390KiB/s-573KiB/s (399kB/s-587kB/s), io=9648KiB (9880kB), run=10012-10025msec 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:45.999 12:57:15 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 00:42:45.999 real 0m11.351s 00:42:45.999 user 0m20.310s 00:42:45.999 sys 0m1.324s 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 ************************************ 00:42:45.999 END TEST fio_dif_1_multi_subsystems 00:42:45.999 ************************************ 00:42:45.999 12:57:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:45.999 12:57:15 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:42:45.999 12:57:15 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 ************************************ 00:42:45.999 START TEST fio_dif_rand_params 00:42:45.999 ************************************ 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 bdev_null0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 12:57:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:45.999 [2024-11-05 12:57:15.202675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:45.999 { 00:42:45.999 "params": { 00:42:45.999 "name": "Nvme$subsystem", 00:42:45.999 "trtype": "$TEST_TRANSPORT", 00:42:45.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:45.999 "adrfam": "ipv4", 00:42:45.999 "trsvcid": "$NVMF_PORT", 00:42:45.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:45.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:45.999 "hdgst": ${hdgst:-false}, 00:42:45.999 "ddgst": ${ddgst:-false} 00:42:45.999 }, 00:42:45.999 "method": 
"bdev_nvme_attach_controller" 00:42:45.999 } 00:42:45.999 EOF 00:42:45.999 )") 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:42:45.999 12:57:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:45.999 12:57:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:45.999 "params": { 00:42:45.999 "name": "Nvme0", 00:42:45.999 "trtype": "tcp", 00:42:45.999 "traddr": "10.0.0.2", 00:42:45.999 "adrfam": "ipv4", 00:42:45.999 "trsvcid": "4420", 00:42:45.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:45.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:45.999 "hdgst": false, 00:42:45.999 "ddgst": false 00:42:45.999 }, 00:42:45.999 "method": "bdev_nvme_attach_controller" 00:42:46.000 }' 00:42:46.000 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:46.000 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:46.000 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:46.000 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:46.000 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:46.000 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:46.259 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:46.259 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:46.259 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:46.259 12:57:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:46.259 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:46.259 ... 00:42:46.259 fio-3.35 00:42:46.259 Starting 3 threads 00:42:52.823 00:42:52.823 filename0: (groupid=0, jobs=1): err= 0: pid=880206: Tue Nov 5 12:57:21 2024 00:42:52.823 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(145MiB/5047msec) 00:42:52.823 slat (nsec): min=7083, max=57672, avg=15352.09, stdev=4357.46 00:42:52.823 clat (usec): min=4068, max=91954, avg=12962.44, stdev=9249.02 00:42:52.823 lat (usec): min=4081, max=91969, avg=12977.80, stdev=9248.99 00:42:52.823 clat percentiles (usec): 00:42:52.823 | 1.00th=[ 4621], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 8455], 00:42:52.824 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[11994], 60.00th=[12780], 00:42:52.824 | 70.00th=[13566], 80.00th=[14222], 90.00th=[16057], 95.00th=[18744], 00:42:52.824 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[91751], 00:42:52.824 | 99.99th=[91751] 00:42:52.824 bw ( KiB/s): min=23552, max=36864, per=35.98%, avg=29696.00, stdev=4315.88, samples=10 00:42:52.824 iops : min= 184, max= 288, avg=232.00, stdev=33.72, samples=10 00:42:52.824 lat (msec) : 10=38.52%, 20=56.92%, 50=1.03%, 100=3.53% 00:42:52.824 cpu : usr=87.55%, sys=8.94%, ctx=261, majf=0, minf=128 00:42:52.824 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:52.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.824 issued rwts: total=1163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:52.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:52.824 filename0: (groupid=0, jobs=1): err= 0: pid=880207: Tue Nov 5 12:57:21 2024 00:42:52.824 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(127MiB/5046msec) 00:42:52.824 slat (nsec): min=4047, max=35762, avg=13714.10, stdev=3799.40 00:42:52.824 
clat (usec): min=4583, max=55056, avg=14794.32, stdev=12152.83 00:42:52.824 lat (usec): min=4610, max=55070, avg=14808.04, stdev=12152.79 00:42:52.824 clat percentiles (usec): 00:42:52.824 | 1.00th=[ 5080], 5.00th=[ 5800], 10.00th=[ 8160], 20.00th=[ 9110], 00:42:52.824 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11731], 60.00th=[12256], 00:42:52.824 | 70.00th=[12649], 80.00th=[13173], 90.00th=[17171], 95.00th=[51643], 00:42:52.824 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54264], 99.95th=[55313], 00:42:52.824 | 99.99th=[55313] 00:42:52.824 bw ( KiB/s): min=14080, max=34304, per=31.55%, avg=26035.20, stdev=5720.58, samples=10 00:42:52.824 iops : min= 110, max= 268, avg=203.40, stdev=44.69, samples=10 00:42:52.824 lat (msec) : 10=27.87%, 20=62.51%, 50=2.06%, 100=7.56% 00:42:52.824 cpu : usr=93.76%, sys=5.63%, ctx=67, majf=0, minf=62 00:42:52.824 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:52.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.824 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:52.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:52.824 filename0: (groupid=0, jobs=1): err= 0: pid=880208: Tue Nov 5 12:57:21 2024 00:42:52.824 read: IOPS=212, BW=26.6MiB/s (27.8MB/s)(134MiB/5046msec) 00:42:52.824 slat (nsec): min=6972, max=45490, avg=13524.66, stdev=1894.23 00:42:52.824 clat (usec): min=4559, max=92032, avg=14062.48, stdev=10242.36 00:42:52.824 lat (usec): min=4572, max=92045, avg=14076.01, stdev=10242.33 00:42:52.824 clat percentiles (usec): 00:42:52.824 | 1.00th=[ 5276], 5.00th=[ 6915], 10.00th=[ 8029], 20.00th=[ 8586], 00:42:52.824 | 30.00th=[ 9503], 40.00th=[11076], 50.00th=[12518], 60.00th=[13173], 00:42:52.824 | 70.00th=[13829], 80.00th=[15270], 90.00th=[16909], 95.00th=[46924], 00:42:52.824 | 99.00th=[53740], 99.50th=[54789], 99.90th=[89654], 
99.95th=[91751], 00:42:52.824 | 99.99th=[91751] 00:42:52.824 bw ( KiB/s): min=20992, max=36608, per=33.19%, avg=27392.00, stdev=4717.32, samples=10 00:42:52.824 iops : min= 164, max= 286, avg=214.00, stdev=36.85, samples=10 00:42:52.824 lat (msec) : 10=33.49%, 20=60.91%, 50=1.21%, 100=4.38% 00:42:52.824 cpu : usr=93.34%, sys=6.16%, ctx=9, majf=0, minf=68 00:42:52.824 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:52.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:52.824 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:52.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:52.824 00:42:52.824 Run status group 0 (all jobs): 00:42:52.824 READ: bw=80.6MiB/s (84.5MB/s), 25.2MiB/s-28.8MiB/s (26.5MB/s-30.2MB/s), io=407MiB (427MB), run=5046-5047msec 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 bdev_null0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 
12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 [2024-11-05 12:57:21.413951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 bdev_null1 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 
12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:42:52.824 bdev_null2 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.824 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem 
config 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:52.825 { 00:42:52.825 "params": { 00:42:52.825 "name": "Nvme$subsystem", 00:42:52.825 "trtype": "$TEST_TRANSPORT", 00:42:52.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:52.825 "adrfam": "ipv4", 00:42:52.825 "trsvcid": "$NVMF_PORT", 00:42:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:52.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:52.825 "hdgst": ${hdgst:-false}, 00:42:52.825 "ddgst": ${ddgst:-false} 00:42:52.825 }, 00:42:52.825 "method": "bdev_nvme_attach_controller" 00:42:52.825 } 00:42:52.825 EOF 00:42:52.825 )") 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # shift 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:52.825 { 00:42:52.825 "params": { 00:42:52.825 "name": "Nvme$subsystem", 00:42:52.825 "trtype": "$TEST_TRANSPORT", 00:42:52.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:52.825 "adrfam": "ipv4", 00:42:52.825 "trsvcid": "$NVMF_PORT", 00:42:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:52.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:52.825 "hdgst": ${hdgst:-false}, 00:42:52.825 "ddgst": ${ddgst:-false} 00:42:52.825 }, 00:42:52.825 "method": "bdev_nvme_attach_controller" 00:42:52.825 } 00:42:52.825 EOF 00:42:52.825 )") 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:52.825 { 00:42:52.825 "params": { 00:42:52.825 "name": "Nvme$subsystem", 00:42:52.825 "trtype": "$TEST_TRANSPORT", 00:42:52.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:52.825 "adrfam": "ipv4", 00:42:52.825 "trsvcid": "$NVMF_PORT", 00:42:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:52.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:52.825 "hdgst": ${hdgst:-false}, 00:42:52.825 "ddgst": ${ddgst:-false} 00:42:52.825 }, 00:42:52.825 "method": "bdev_nvme_attach_controller" 00:42:52.825 } 00:42:52.825 EOF 00:42:52.825 )") 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:52.825 "params": { 00:42:52.825 "name": "Nvme0", 00:42:52.825 "trtype": "tcp", 00:42:52.825 "traddr": "10.0.0.2", 00:42:52.825 "adrfam": "ipv4", 00:42:52.825 "trsvcid": "4420", 00:42:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:52.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:52.825 "hdgst": false, 00:42:52.825 "ddgst": false 00:42:52.825 }, 00:42:52.825 "method": "bdev_nvme_attach_controller" 00:42:52.825 },{ 00:42:52.825 "params": { 00:42:52.825 "name": "Nvme1", 00:42:52.825 "trtype": "tcp", 00:42:52.825 "traddr": "10.0.0.2", 00:42:52.825 "adrfam": "ipv4", 00:42:52.825 "trsvcid": "4420", 00:42:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:52.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:52.825 "hdgst": false, 00:42:52.825 "ddgst": false 00:42:52.825 }, 00:42:52.825 "method": "bdev_nvme_attach_controller" 00:42:52.825 },{ 00:42:52.825 "params": { 00:42:52.825 "name": "Nvme2", 00:42:52.825 "trtype": "tcp", 00:42:52.825 "traddr": "10.0.0.2", 00:42:52.825 "adrfam": "ipv4", 00:42:52.825 "trsvcid": "4420", 00:42:52.825 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:52.825 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:52.825 "hdgst": false, 00:42:52.825 "ddgst": false 00:42:52.825 }, 00:42:52.825 "method": "bdev_nvme_attach_controller" 00:42:52.825 }' 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:52.825 12:57:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:52.825 12:57:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:52.825 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:52.825 ... 00:42:52.825 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:52.825 ... 00:42:52.825 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:52.825 ... 
00:42:52.825 fio-3.35 00:42:52.825 Starting 24 threads 00:43:05.030 00:43:05.030 filename0: (groupid=0, jobs=1): err= 0: pid=881067: Tue Nov 5 12:57:32 2024 00:43:05.030 read: IOPS=470, BW=1880KiB/s (1926kB/s)(18.4MiB/10006msec) 00:43:05.030 slat (nsec): min=8422, max=71699, avg=33563.95, stdev=10551.06 00:43:05.030 clat (usec): min=20989, max=65531, avg=33742.58, stdev=2302.00 00:43:05.030 lat (usec): min=21018, max=65558, avg=33776.14, stdev=2300.94 00:43:05.030 clat percentiles (usec): 00:43:05.030 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:43:05.030 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.030 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.030 | 99.00th=[41681], 99.50th=[43779], 99.90th=[65274], 99.95th=[65274], 00:43:05.030 | 99.99th=[65274] 00:43:05.030 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1872.84, stdev=76.45, samples=19 00:43:05.030 iops : min= 416, max= 480, avg=468.21, stdev=19.11, samples=19 00:43:05.030 lat (msec) : 50=99.66%, 100=0.34% 00:43:05.030 cpu : usr=97.12%, sys=1.81%, ctx=153, majf=0, minf=19 00:43:05.030 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.030 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.030 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.030 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename0: (groupid=0, jobs=1): err= 0: pid=881068: Tue Nov 5 12:57:32 2024 00:43:05.031 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10019msec) 00:43:05.031 slat (nsec): min=9005, max=86082, avg=39225.18, stdev=12410.61 00:43:05.031 clat (usec): min=15543, max=43861, avg=33501.87, stdev=1716.86 00:43:05.031 lat (usec): min=15577, max=43880, avg=33541.10, stdev=1717.14 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 
1.00th=[26870], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.031 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.031 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.031 | 99.00th=[38011], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.031 | 99.99th=[43779] 00:43:05.031 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.031 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.031 lat (msec) : 20=0.34%, 50=99.66% 00:43:05.031 cpu : usr=98.47%, sys=1.11%, ctx=15, majf=0, minf=16 00:43:05.031 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename0: (groupid=0, jobs=1): err= 0: pid=881069: Tue Nov 5 12:57:32 2024 00:43:05.031 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10019msec) 00:43:05.031 slat (usec): min=11, max=146, avg=39.20, stdev=14.03 00:43:05.031 clat (usec): min=15578, max=43971, avg=33505.43, stdev=1723.24 00:43:05.031 lat (usec): min=15613, max=43993, avg=33544.63, stdev=1723.40 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 1.00th=[26608], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.031 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.031 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.031 | 99.00th=[38011], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.031 | 99.99th=[43779] 00:43:05.031 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.031 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.031 lat 
(msec) : 20=0.34%, 50=99.66% 00:43:05.031 cpu : usr=98.06%, sys=1.39%, ctx=45, majf=0, minf=25 00:43:05.031 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename0: (groupid=0, jobs=1): err= 0: pid=881070: Tue Nov 5 12:57:32 2024 00:43:05.031 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.5MiB/10007msec) 00:43:05.031 slat (usec): min=8, max=109, avg=39.57, stdev=15.18 00:43:05.031 clat (usec): min=14490, max=91174, avg=33410.07, stdev=3852.30 00:43:05.031 lat (usec): min=14530, max=91206, avg=33449.65, stdev=3853.26 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 1.00th=[21627], 5.00th=[28705], 10.00th=[32637], 20.00th=[32900], 00:43:05.031 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.031 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[36963], 00:43:05.031 | 99.00th=[43779], 99.50th=[52167], 99.90th=[72877], 99.95th=[72877], 00:43:05.031 | 99.99th=[90702] 00:43:05.031 bw ( KiB/s): min= 1792, max= 2016, per=4.18%, avg=1891.20, stdev=62.85, samples=20 00:43:05.031 iops : min= 448, max= 504, avg=472.80, stdev=15.71, samples=20 00:43:05.031 lat (msec) : 20=0.63%, 50=98.86%, 100=0.51% 00:43:05.031 cpu : usr=97.01%, sys=2.00%, ctx=129, majf=0, minf=29 00:43:05.031 IO depths : 1=5.1%, 2=10.3%, 4=21.3%, 8=55.3%, 16=7.9%, 32=0.0%, >=64=0.0% 00:43:05.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 complete : 0=0.0%, 4=93.2%, 8=1.6%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 issued rwts: total=4744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename0: 
(groupid=0, jobs=1): err= 0: pid=881071: Tue Nov 5 12:57:32 2024 00:43:05.031 read: IOPS=470, BW=1881KiB/s (1927kB/s)(18.4MiB/10001msec) 00:43:05.031 slat (usec): min=8, max=109, avg=38.06, stdev=18.34 00:43:05.031 clat (usec): min=17324, max=70986, avg=33669.76, stdev=1860.93 00:43:05.031 lat (usec): min=17336, max=71023, avg=33707.82, stdev=1859.89 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.031 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.031 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.031 | 99.00th=[41157], 99.50th=[43779], 99.90th=[51119], 99.95th=[51643], 00:43:05.031 | 99.99th=[70779] 00:43:05.031 bw ( KiB/s): min= 1664, max= 1936, per=4.15%, avg=1879.58, stdev=74.74, samples=19 00:43:05.031 iops : min= 416, max= 484, avg=469.89, stdev=18.68, samples=19 00:43:05.031 lat (msec) : 20=0.13%, 50=99.49%, 100=0.38% 00:43:05.031 cpu : usr=98.08%, sys=1.52%, ctx=14, majf=0, minf=16 00:43:05.031 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:05.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename0: (groupid=0, jobs=1): err= 0: pid=881072: Tue Nov 5 12:57:32 2024 00:43:05.031 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec) 00:43:05.031 slat (nsec): min=8135, max=99837, avg=26446.34, stdev=15652.09 00:43:05.031 clat (usec): min=21057, max=44393, avg=33753.21, stdev=1480.77 00:43:05.031 lat (usec): min=21086, max=44418, avg=33779.65, stdev=1480.84 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:43:05.031 | 30.00th=[33162], 
40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:43:05.031 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.031 | 99.00th=[41681], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:43:05.031 | 99.99th=[44303] 00:43:05.031 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20 00:43:05.031 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:43:05.031 lat (msec) : 50=100.00% 00:43:05.031 cpu : usr=98.34%, sys=1.26%, ctx=23, majf=0, minf=17 00:43:05.031 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename0: (groupid=0, jobs=1): err= 0: pid=881073: Tue Nov 5 12:57:32 2024 00:43:05.031 read: IOPS=470, BW=1881KiB/s (1927kB/s)(18.4MiB/10001msec) 00:43:05.031 slat (nsec): min=8315, max=75907, avg=29933.94, stdev=10918.48 00:43:05.031 clat (usec): min=17238, max=51344, avg=33743.37, stdev=1724.94 00:43:05.031 lat (usec): min=17249, max=51366, avg=33773.30, stdev=1725.58 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:43:05.031 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.031 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.031 | 99.00th=[41157], 99.50th=[43779], 99.90th=[51119], 99.95th=[51119], 00:43:05.031 | 99.99th=[51119] 00:43:05.031 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1879.58, stdev=74.55, samples=19 00:43:05.031 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:43:05.031 lat (msec) : 20=0.13%, 50=99.53%, 100=0.34% 00:43:05.031 cpu : usr=96.79%, sys=1.92%, ctx=224, 
majf=0, minf=33 00:43:05.031 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:05.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename0: (groupid=0, jobs=1): err= 0: pid=881074: Tue Nov 5 12:57:32 2024 00:43:05.031 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10020msec) 00:43:05.031 slat (nsec): min=7736, max=93976, avg=26791.42, stdev=15599.12 00:43:05.031 clat (usec): min=14566, max=44055, avg=33635.46, stdev=1706.41 00:43:05.031 lat (usec): min=14617, max=44077, avg=33662.25, stdev=1706.53 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 1.00th=[28181], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:43:05.031 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:43:05.031 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.031 | 99.00th=[38536], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.031 | 99.99th=[44303] 00:43:05.031 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.031 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.031 lat (msec) : 20=0.34%, 50=99.66% 00:43:05.031 cpu : usr=96.75%, sys=2.04%, ctx=189, majf=0, minf=24 00:43:05.031 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:05.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.031 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.031 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.031 filename1: (groupid=0, jobs=1): err= 0: pid=881075: Tue Nov 5 12:57:32 2024 00:43:05.031 
read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10019msec) 00:43:05.031 slat (usec): min=6, max=107, avg=38.98, stdev=19.46 00:43:05.031 clat (usec): min=14696, max=43912, avg=33520.94, stdev=1854.07 00:43:05.031 lat (usec): min=14743, max=43941, avg=33559.92, stdev=1852.16 00:43:05.031 clat percentiles (usec): 00:43:05.031 | 1.00th=[26608], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.031 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.031 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.031 | 99.00th=[38536], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.031 | 99.99th=[43779] 00:43:05.031 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.031 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.032 lat (msec) : 20=0.34%, 50=99.66% 00:43:05.032 cpu : usr=98.20%, sys=1.38%, ctx=13, majf=0, minf=26 00:43:05.032 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename1: (groupid=0, jobs=1): err= 0: pid=881076: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec) 00:43:05.032 slat (usec): min=12, max=116, avg=68.18, stdev=20.55 00:43:05.032 clat (usec): min=21195, max=44313, avg=33353.55, stdev=1548.73 00:43:05.032 lat (usec): min=21228, max=44340, avg=33421.73, stdev=1547.32 00:43:05.032 clat percentiles (usec): 00:43:05.032 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:43:05.032 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:43:05.032 | 70.00th=[33424], 80.00th=[33817], 
90.00th=[34866], 95.00th=[35390], 00:43:05.032 | 99.00th=[40633], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:43:05.032 | 99.99th=[44303] 00:43:05.032 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20 00:43:05.032 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:43:05.032 lat (msec) : 50=100.00% 00:43:05.032 cpu : usr=98.42%, sys=1.13%, ctx=15, majf=0, minf=19 00:43:05.032 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename1: (groupid=0, jobs=1): err= 0: pid=881077: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10016msec) 00:43:05.032 slat (usec): min=5, max=113, avg=41.58, stdev=12.94 00:43:05.032 clat (usec): min=15597, max=43864, avg=33467.39, stdev=1766.69 00:43:05.032 lat (usec): min=15631, max=43894, avg=33508.97, stdev=1767.31 00:43:05.032 clat percentiles (usec): 00:43:05.032 | 1.00th=[24511], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.032 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.032 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.032 | 99.00th=[38011], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.032 | 99.99th=[43779] 00:43:05.032 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1886.63, stdev=58.11, samples=19 00:43:05.032 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:43:05.032 lat (msec) : 20=0.34%, 50=99.66% 00:43:05.032 cpu : usr=97.01%, sys=1.84%, ctx=172, majf=0, minf=27 00:43:05.032 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename1: (groupid=0, jobs=1): err= 0: pid=881078: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10019msec) 00:43:05.032 slat (nsec): min=9871, max=88291, avg=39297.18, stdev=12459.28 00:43:05.032 clat (usec): min=15530, max=43897, avg=33503.23, stdev=1718.14 00:43:05.032 lat (usec): min=15563, max=43919, avg=33542.53, stdev=1718.35 00:43:05.032 clat percentiles (usec): 00:43:05.032 | 1.00th=[26608], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.032 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.032 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.032 | 99.00th=[38011], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.032 | 99.99th=[43779] 00:43:05.032 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.032 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.032 lat (msec) : 20=0.34%, 50=99.66% 00:43:05.032 cpu : usr=96.67%, sys=2.06%, ctx=136, majf=0, minf=26 00:43:05.032 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename1: (groupid=0, jobs=1): err= 0: pid=881079: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=470, BW=1880KiB/s (1926kB/s)(18.4MiB/10006msec) 00:43:05.032 slat (usec): min=11, max=110, avg=41.87, stdev=19.14 
00:43:05.032 clat (usec): min=21004, max=66204, avg=33672.97, stdev=2345.01 00:43:05.032 lat (usec): min=21042, max=66234, avg=33714.85, stdev=2342.83 00:43:05.032 clat percentiles (usec): 00:43:05.032 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.032 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.032 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.032 | 99.00th=[41157], 99.50th=[43779], 99.90th=[66323], 99.95th=[66323], 00:43:05.032 | 99.99th=[66323] 00:43:05.032 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1875.35, stdev=74.71, samples=20 00:43:05.032 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:43:05.032 lat (msec) : 50=99.66%, 100=0.34% 00:43:05.032 cpu : usr=98.27%, sys=1.32%, ctx=13, majf=0, minf=20 00:43:05.032 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename1: (groupid=0, jobs=1): err= 0: pid=881080: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10020msec) 00:43:05.032 slat (usec): min=9, max=131, avg=42.01, stdev=17.70 00:43:05.032 clat (usec): min=14541, max=43942, avg=33472.51, stdev=1730.24 00:43:05.032 lat (usec): min=14583, max=43964, avg=33514.52, stdev=1729.59 00:43:05.032 clat percentiles (usec): 00:43:05.032 | 1.00th=[28181], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:43:05.032 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.032 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.032 | 99.00th=[38011], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.032 | 
99.99th=[43779] 00:43:05.032 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.032 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.032 lat (msec) : 20=0.34%, 50=99.66% 00:43:05.032 cpu : usr=97.15%, sys=1.87%, ctx=119, majf=0, minf=21 00:43:05.032 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename1: (groupid=0, jobs=1): err= 0: pid=881081: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=470, BW=1880KiB/s (1926kB/s)(18.4MiB/10006msec) 00:43:05.032 slat (nsec): min=9319, max=98124, avg=36864.31, stdev=12739.53 00:43:05.032 clat (usec): min=21016, max=65622, avg=33703.49, stdev=2304.56 00:43:05.032 lat (usec): min=21043, max=65654, avg=33740.36, stdev=2304.07 00:43:05.032 clat percentiles (usec): 00:43:05.032 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:43:05.032 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.032 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.032 | 99.00th=[41157], 99.50th=[43779], 99.90th=[65799], 99.95th=[65799], 00:43:05.032 | 99.99th=[65799] 00:43:05.032 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1872.84, stdev=76.45, samples=19 00:43:05.032 iops : min= 416, max= 480, avg=468.21, stdev=19.11, samples=19 00:43:05.032 lat (msec) : 50=99.66%, 100=0.34% 00:43:05.032 cpu : usr=97.56%, sys=1.55%, ctx=384, majf=0, minf=18 00:43:05.032 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename1: (groupid=0, jobs=1): err= 0: pid=881082: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=470, BW=1881KiB/s (1927kB/s)(18.4MiB/10001msec) 00:43:05.032 slat (usec): min=8, max=155, avg=36.49, stdev=16.30 00:43:05.032 clat (usec): min=17586, max=66651, avg=33705.82, stdev=2001.39 00:43:05.032 lat (usec): min=17596, max=66680, avg=33742.31, stdev=2000.68 00:43:05.032 clat percentiles (usec): 00:43:05.032 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.032 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.032 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.032 | 99.00th=[41157], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:43:05.032 | 99.99th=[66847] 00:43:05.032 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1879.58, stdev=73.20, samples=19 00:43:05.032 iops : min= 416, max= 480, avg=469.89, stdev=18.30, samples=19 00:43:05.032 lat (msec) : 20=0.30%, 50=99.36%, 100=0.34% 00:43:05.032 cpu : usr=97.88%, sys=1.59%, ctx=29, majf=0, minf=34 00:43:05.032 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:43:05.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.032 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.032 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.032 filename2: (groupid=0, jobs=1): err= 0: pid=881083: Tue Nov 5 12:57:32 2024 00:43:05.032 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10014msec) 00:43:05.032 slat (nsec): min=6981, max=98631, avg=15224.96, stdev=12766.91 00:43:05.032 clat (usec): min=17916, max=43883, avg=33690.11, stdev=1811.70 00:43:05.032 lat (usec): 
min=17926, max=43899, avg=33705.33, stdev=1812.14 00:43:05.032 clat percentiles (usec): 00:43:05.033 | 1.00th=[22152], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:43:05.033 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:43:05.033 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:43:05.033 | 99.00th=[40109], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:43:05.033 | 99.99th=[43779] 00:43:05.033 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.033 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.033 lat (msec) : 20=0.68%, 50=99.32% 00:43:05.033 cpu : usr=98.48%, sys=1.11%, ctx=12, majf=0, minf=20 00:43:05.033 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.033 filename2: (groupid=0, jobs=1): err= 0: pid=881084: Tue Nov 5 12:57:32 2024 00:43:05.033 read: IOPS=470, BW=1880KiB/s (1925kB/s)(18.4MiB/10007msec) 00:43:05.033 slat (usec): min=8, max=153, avg=38.33, stdev=13.49 00:43:05.033 clat (usec): min=17443, max=72873, avg=33684.21, stdev=2742.59 00:43:05.033 lat (usec): min=17464, max=72908, avg=33722.54, stdev=2742.49 00:43:05.033 clat percentiles (usec): 00:43:05.033 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.033 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.033 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.033 | 99.00th=[43254], 99.50th=[43779], 99.90th=[72877], 99.95th=[72877], 00:43:05.033 | 99.99th=[72877] 00:43:05.033 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1875.20, stdev=75.15, 
samples=20 00:43:05.033 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:43:05.033 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:43:05.033 cpu : usr=98.10%, sys=1.42%, ctx=31, majf=0, minf=21 00:43:05.033 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.033 filename2: (groupid=0, jobs=1): err= 0: pid=881085: Tue Nov 5 12:57:32 2024 00:43:05.033 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:43:05.033 slat (usec): min=6, max=146, avg=26.47, stdev=15.01 00:43:05.033 clat (usec): min=19157, max=43911, avg=33625.31, stdev=1686.38 00:43:05.033 lat (usec): min=19183, max=43932, avg=33651.78, stdev=1683.61 00:43:05.033 clat percentiles (usec): 00:43:05.033 | 1.00th=[27395], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:43:05.033 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:43:05.033 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.033 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:43:05.033 | 99.99th=[43779] 00:43:05.033 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1888.00, stdev=56.87, samples=20 00:43:05.033 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.033 lat (msec) : 20=0.51%, 50=99.49% 00:43:05.033 cpu : usr=96.28%, sys=2.08%, ctx=285, majf=0, minf=28 00:43:05.033 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4736,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.033 filename2: (groupid=0, jobs=1): err= 0: pid=881086: Tue Nov 5 12:57:32 2024 00:43:05.033 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10019msec) 00:43:05.033 slat (usec): min=7, max=137, avg=59.76, stdev=23.64 00:43:05.033 clat (usec): min=14688, max=45120, avg=33322.77, stdev=1850.51 00:43:05.033 lat (usec): min=14731, max=45175, avg=33382.53, stdev=1851.53 00:43:05.033 clat percentiles (usec): 00:43:05.033 | 1.00th=[26608], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:43:05.033 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.033 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.033 | 99.00th=[38011], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:43:05.033 | 99.99th=[45351] 00:43:05.033 bw ( KiB/s): min= 1792, max= 1923, per=4.17%, avg=1888.15, stdev=56.96, samples=20 00:43:05.033 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:43:05.033 lat (msec) : 20=0.34%, 50=99.66% 00:43:05.033 cpu : usr=98.22%, sys=1.34%, ctx=12, majf=0, minf=26 00:43:05.033 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.033 filename2: (groupid=0, jobs=1): err= 0: pid=881087: Tue Nov 5 12:57:32 2024 00:43:05.033 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec) 00:43:05.033 slat (usec): min=8, max=104, avg=28.14, stdev=22.90 00:43:05.033 clat (usec): min=21254, max=44359, avg=33720.13, stdev=1506.54 00:43:05.033 lat (usec): min=21278, max=44394, avg=33748.27, stdev=1503.68 00:43:05.033 clat percentiles (usec): 00:43:05.033 | 1.00th=[32113], 
5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:43:05.033 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:43:05.033 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.033 | 99.00th=[41681], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:43:05.033 | 99.99th=[44303] 00:43:05.033 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20 00:43:05.033 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:43:05.033 lat (msec) : 50=100.00% 00:43:05.033 cpu : usr=98.33%, sys=1.26%, ctx=12, majf=0, minf=31 00:43:05.033 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.033 filename2: (groupid=0, jobs=1): err= 0: pid=881088: Tue Nov 5 12:57:32 2024 00:43:05.033 read: IOPS=470, BW=1880KiB/s (1925kB/s)(18.4MiB/10007msec) 00:43:05.033 slat (nsec): min=8393, max=91879, avg=37528.70, stdev=13071.42 00:43:05.033 clat (usec): min=17127, max=72816, avg=33692.83, stdev=2760.69 00:43:05.033 lat (usec): min=17142, max=72848, avg=33730.36, stdev=2760.31 00:43:05.033 clat percentiles (usec): 00:43:05.033 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:43:05.033 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.033 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.033 | 99.00th=[43254], 99.50th=[43779], 99.90th=[72877], 99.95th=[72877], 00:43:05.033 | 99.99th=[72877] 00:43:05.033 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1875.20, stdev=75.15, samples=20 00:43:05.033 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:43:05.033 lat (msec) : 
20=0.34%, 50=99.32%, 100=0.34% 00:43:05.033 cpu : usr=98.51%, sys=1.09%, ctx=12, majf=0, minf=22 00:43:05.033 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.033 filename2: (groupid=0, jobs=1): err= 0: pid=881089: Tue Nov 5 12:57:32 2024 00:43:05.033 read: IOPS=470, BW=1880KiB/s (1926kB/s)(18.4MiB/10006msec) 00:43:05.033 slat (nsec): min=8569, max=96733, avg=36446.50, stdev=12968.80 00:43:05.033 clat (usec): min=19695, max=65963, avg=33707.91, stdev=2337.10 00:43:05.033 lat (usec): min=19764, max=65991, avg=33744.35, stdev=2336.12 00:43:05.033 clat percentiles (usec): 00:43:05.033 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:43:05.033 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:43:05.033 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[35390], 00:43:05.033 | 99.00th=[41157], 99.50th=[43779], 99.90th=[65799], 99.95th=[65799], 00:43:05.033 | 99.99th=[65799] 00:43:05.033 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1875.35, stdev=74.71, samples=20 00:43:05.033 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:43:05.033 lat (msec) : 20=0.11%, 50=99.55%, 100=0.34% 00:43:05.033 cpu : usr=98.25%, sys=1.33%, ctx=13, majf=0, minf=17 00:43:05.033 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.033 
filename2: (groupid=0, jobs=1): err= 0: pid=881090: Tue Nov 5 12:57:32 2024 00:43:05.033 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10009msec) 00:43:05.033 slat (usec): min=8, max=113, avg=47.47, stdev=20.46 00:43:05.033 clat (usec): min=17252, max=52222, avg=33523.96, stdev=2503.75 00:43:05.033 lat (usec): min=17292, max=52235, avg=33571.43, stdev=2503.05 00:43:05.033 clat percentiles (usec): 00:43:05.033 | 1.00th=[24773], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:43:05.033 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:43:05.033 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34866], 95.00th=[35914], 00:43:05.033 | 99.00th=[43779], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:43:05.033 | 99.99th=[52167] 00:43:05.033 bw ( KiB/s): min= 1664, max= 1968, per=4.16%, avg=1880.00, stdev=74.60, samples=20 00:43:05.033 iops : min= 416, max= 492, avg=470.00, stdev=18.65, samples=20 00:43:05.033 lat (msec) : 20=0.68%, 50=98.77%, 100=0.55% 00:43:05.033 cpu : usr=97.07%, sys=1.91%, ctx=115, majf=0, minf=28 00:43:05.033 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 00:43:05.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.033 issued rwts: total=4716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.033 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:05.034 00:43:05.034 Run status group 0 (all jobs): 00:43:05.034 READ: bw=44.2MiB/s (46.3MB/s), 1880KiB/s-1896KiB/s (1925kB/s-1942kB/s), io=443MiB (464MB), run=10001-10020msec 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 
12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- 
# for sub in "$@" 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 bdev_null0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 [2024-11-05 
12:57:33.169569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 bdev_null1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:05.034 { 00:43:05.034 "params": { 00:43:05.034 "name": "Nvme$subsystem", 00:43:05.034 "trtype": "$TEST_TRANSPORT", 00:43:05.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:05.034 "adrfam": "ipv4", 00:43:05.034 "trsvcid": "$NVMF_PORT", 00:43:05.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:05.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:05.034 "hdgst": ${hdgst:-false}, 00:43:05.034 "ddgst": ${ddgst:-false} 00:43:05.034 }, 00:43:05.034 "method": "bdev_nvme_attach_controller" 00:43:05.034 } 00:43:05.034 EOF 00:43:05.034 )") 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.034 12:57:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:05.034 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:43:05.035 { 00:43:05.035 "params": { 00:43:05.035 "name": "Nvme$subsystem", 00:43:05.035 "trtype": "$TEST_TRANSPORT", 00:43:05.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:05.035 "adrfam": "ipv4", 00:43:05.035 "trsvcid": "$NVMF_PORT", 00:43:05.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:05.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:05.035 "hdgst": ${hdgst:-false}, 00:43:05.035 "ddgst": ${ddgst:-false} 00:43:05.035 }, 00:43:05.035 "method": "bdev_nvme_attach_controller" 00:43:05.035 } 00:43:05.035 EOF 00:43:05.035 )") 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:05.035 "params": { 00:43:05.035 "name": "Nvme0", 00:43:05.035 "trtype": "tcp", 00:43:05.035 "traddr": "10.0.0.2", 00:43:05.035 "adrfam": "ipv4", 00:43:05.035 "trsvcid": "4420", 00:43:05.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:05.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:05.035 "hdgst": false, 00:43:05.035 "ddgst": false 00:43:05.035 }, 00:43:05.035 "method": "bdev_nvme_attach_controller" 00:43:05.035 },{ 00:43:05.035 "params": { 00:43:05.035 "name": "Nvme1", 00:43:05.035 "trtype": "tcp", 00:43:05.035 "traddr": "10.0.0.2", 00:43:05.035 "adrfam": "ipv4", 00:43:05.035 "trsvcid": "4420", 00:43:05.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:05.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:05.035 "hdgst": false, 00:43:05.035 "ddgst": false 00:43:05.035 }, 00:43:05.035 "method": "bdev_nvme_attach_controller" 00:43:05.035 }' 00:43:05.035 12:57:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:05.035 12:57:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.035 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:05.035 ... 00:43:05.035 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:05.035 ... 
00:43:05.035 fio-3.35 00:43:05.035 Starting 4 threads 00:43:10.298 00:43:10.298 filename0: (groupid=0, jobs=1): err= 0: pid=882467: Tue Nov 5 12:57:39 2024 00:43:10.298 read: IOPS=1848, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5002msec) 00:43:10.298 slat (nsec): min=3900, max=88812, avg=17507.96, stdev=10302.30 00:43:10.298 clat (usec): min=1000, max=7621, avg=4270.27, stdev=573.12 00:43:10.298 lat (usec): min=1013, max=7641, avg=4287.78, stdev=573.33 00:43:10.298 clat percentiles (usec): 00:43:10.298 | 1.00th=[ 2474], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 4015], 00:43:10.298 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:43:10.298 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5211], 00:43:10.298 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7308], 99.95th=[ 7439], 00:43:10.298 | 99.99th=[ 7635] 00:43:10.298 bw ( KiB/s): min=14272, max=15152, per=24.98%, avg=14732.44, stdev=286.08, samples=9 00:43:10.298 iops : min= 1784, max= 1894, avg=1841.56, stdev=35.76, samples=9 00:43:10.298 lat (msec) : 2=0.45%, 4=19.60%, 10=79.95% 00:43:10.298 cpu : usr=94.90%, sys=4.60%, ctx=10, majf=0, minf=105 00:43:10.298 IO depths : 1=0.4%, 2=12.7%, 4=59.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:10.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.298 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.298 issued rwts: total=9247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.298 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:10.298 filename0: (groupid=0, jobs=1): err= 0: pid=882468: Tue Nov 5 12:57:39 2024 00:43:10.298 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5003msec) 00:43:10.298 slat (usec): min=4, max=109, avg=21.25, stdev=11.35 00:43:10.298 clat (usec): min=652, max=8595, avg=4238.80, stdev=604.03 00:43:10.298 lat (usec): min=666, max=8615, avg=4260.05, stdev=604.61 00:43:10.299 clat percentiles (usec): 00:43:10.299 | 1.00th=[ 2180], 5.00th=[ 3458], 
10.00th=[ 3752], 20.00th=[ 3982], 00:43:10.299 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4293], 00:43:10.299 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5276], 00:43:10.299 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7570], 99.95th=[ 7701], 00:43:10.299 | 99.99th=[ 8586] 00:43:10.299 bw ( KiB/s): min=14592, max=15024, per=25.08%, avg=14794.67, stdev=140.17, samples=9 00:43:10.299 iops : min= 1824, max= 1878, avg=1849.33, stdev=17.52, samples=9 00:43:10.299 lat (usec) : 750=0.02%, 1000=0.04% 00:43:10.299 lat (msec) : 2=0.79%, 4=20.65%, 10=78.50% 00:43:10.299 cpu : usr=91.18%, sys=5.54%, ctx=221, majf=0, minf=29 00:43:10.299 IO depths : 1=0.4%, 2=18.8%, 4=54.6%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:10.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.299 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.299 issued rwts: total=9264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.299 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:10.299 filename1: (groupid=0, jobs=1): err= 0: pid=882469: Tue Nov 5 12:57:39 2024 00:43:10.299 read: IOPS=1866, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5004msec) 00:43:10.299 slat (nsec): min=4274, max=89257, avg=17519.41, stdev=10425.20 00:43:10.299 clat (usec): min=701, max=7543, avg=4224.82, stdev=575.54 00:43:10.299 lat (usec): min=713, max=7564, avg=4242.34, stdev=576.22 00:43:10.299 clat percentiles (usec): 00:43:10.299 | 1.00th=[ 2245], 5.00th=[ 3425], 10.00th=[ 3720], 20.00th=[ 3949], 00:43:10.299 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:43:10.299 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 5145], 00:43:10.299 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7373], 00:43:10.299 | 99.99th=[ 7570] 00:43:10.299 bw ( KiB/s): min=14656, max=15328, per=25.32%, avg=14931.20, stdev=236.45, samples=10 00:43:10.299 iops : min= 1832, max= 1916, avg=1866.40, 
stdev=29.56, samples=10 00:43:10.299 lat (usec) : 750=0.01%, 1000=0.02% 00:43:10.299 lat (msec) : 2=0.64%, 4=22.28%, 10=77.04% 00:43:10.299 cpu : usr=95.28%, sys=4.24%, ctx=8, majf=0, minf=49 00:43:10.299 IO depths : 1=0.6%, 2=16.0%, 4=56.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:10.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.299 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.299 issued rwts: total=9340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.299 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:10.299 filename1: (groupid=0, jobs=1): err= 0: pid=882470: Tue Nov 5 12:57:39 2024 00:43:10.299 read: IOPS=1806, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5003msec) 00:43:10.299 slat (nsec): min=3849, max=90966, avg=20409.11, stdev=10971.82 00:43:10.299 clat (usec): min=655, max=9516, avg=4351.47, stdev=670.84 00:43:10.299 lat (usec): min=673, max=9532, avg=4371.88, stdev=670.59 00:43:10.299 clat percentiles (usec): 00:43:10.299 | 1.00th=[ 2245], 5.00th=[ 3621], 10.00th=[ 3851], 20.00th=[ 4080], 00:43:10.299 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:43:10.299 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5604], 00:43:10.299 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[ 7898], 00:43:10.299 | 99.99th=[ 9503] 00:43:10.299 bw ( KiB/s): min=14060, max=14784, per=24.51%, avg=14454.00, stdev=247.10, samples=10 00:43:10.299 iops : min= 1757, max= 1848, avg=1806.70, stdev=30.98, samples=10 00:43:10.299 lat (usec) : 750=0.02%, 1000=0.07% 00:43:10.299 lat (msec) : 2=0.71%, 4=14.13%, 10=85.08% 00:43:10.299 cpu : usr=95.72%, sys=3.56%, ctx=111, majf=0, minf=31 00:43:10.299 IO depths : 1=0.4%, 2=16.5%, 4=56.4%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:10.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.299 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.299 issued 
rwts: total=9040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.299 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:10.299 00:43:10.299 Run status group 0 (all jobs): 00:43:10.299 READ: bw=57.6MiB/s (60.4MB/s), 14.1MiB/s-14.6MiB/s (14.8MB/s-15.3MB/s), io=288MiB (302MB), run=5002-5004msec 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:10.299 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.558 12:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:10.558 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.558 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:10.558 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.558 00:43:10.558 real 0m24.372s 00:43:10.558 user 4m32.424s 00:43:10.558 sys 0m6.630s 00:43:10.558 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:10.558 12:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:10.558 ************************************ 00:43:10.558 END TEST fio_dif_rand_params 00:43:10.558 ************************************ 00:43:10.558 12:57:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:10.558 12:57:39 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:10.558 12:57:39 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:10.558 12:57:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:10.558 ************************************ 00:43:10.558 START TEST fio_dif_digest 00:43:10.558 ************************************ 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:10.558 12:57:39 
nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.558 bdev_null0 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:10.558 [2024-11-05 12:57:39.618929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:10.558 { 00:43:10.558 "params": { 00:43:10.558 "name": "Nvme$subsystem", 00:43:10.558 "trtype": "$TEST_TRANSPORT", 00:43:10.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:10.558 "adrfam": "ipv4", 
00:43:10.558 "trsvcid": "$NVMF_PORT", 00:43:10.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:10.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:10.558 "hdgst": ${hdgst:-false}, 00:43:10.558 "ddgst": ${ddgst:-false} 00:43:10.558 }, 00:43:10.558 "method": "bdev_nvme_attach_controller" 00:43:10.558 } 00:43:10.558 EOF 00:43:10.558 )") 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:10.558 12:57:39 
nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:10.558 "params": { 00:43:10.558 "name": "Nvme0", 00:43:10.558 "trtype": "tcp", 00:43:10.558 "traddr": "10.0.0.2", 00:43:10.558 "adrfam": "ipv4", 00:43:10.558 "trsvcid": "4420", 00:43:10.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:10.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:10.558 "hdgst": true, 00:43:10.558 "ddgst": true 00:43:10.558 }, 00:43:10.558 "method": "bdev_nvme_attach_controller" 00:43:10.558 }' 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:43:10.558 12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:10.558 
12:57:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:10.816 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:10.816 ... 00:43:10.816 fio-3.35 00:43:10.816 Starting 3 threads 00:43:23.059 00:43:23.059 filename0: (groupid=0, jobs=1): err= 0: pid=883217: Tue Nov 5 12:57:50 2024 00:43:23.059 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(262MiB/10008msec) 00:43:23.059 slat (nsec): min=4244, max=71454, avg=18882.18, stdev=5085.98 00:43:23.059 clat (usec): min=9037, max=56526, avg=14327.30, stdev=4714.11 00:43:23.059 lat (usec): min=9056, max=56541, avg=14346.19, stdev=4713.97 00:43:23.059 clat percentiles (usec): 00:43:23.059 | 1.00th=[11207], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:43:23.059 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:43:23.059 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:43:23.059 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:43:23.059 | 99.99th=[56361] 00:43:23.059 bw ( KiB/s): min=23296, max=28928, per=33.87%, avg=26739.20, stdev=1403.34, samples=20 00:43:23.059 iops : min= 182, max= 226, avg=208.90, stdev=10.96, samples=20 00:43:23.059 lat (msec) : 10=0.24%, 20=98.47%, 100=1.29% 00:43:23.059 cpu : usr=93.49%, sys=5.02%, ctx=257, majf=0, minf=177 00:43:23.059 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:23.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:23.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:23.059 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:23.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:23.060 filename0: (groupid=0, jobs=1): err= 0: pid=883218: Tue Nov 5 12:57:50 2024 00:43:23.060 read: IOPS=207, BW=25.9MiB/s 
(27.2MB/s)(261MiB/10044msec) 00:43:23.060 slat (nsec): min=7297, max=81706, avg=17825.21, stdev=4106.58 00:43:23.060 clat (usec): min=7978, max=48266, avg=14417.00, stdev=1830.82 00:43:23.060 lat (usec): min=7996, max=48284, avg=14434.83, stdev=1830.89 00:43:23.060 clat percentiles (usec): 00:43:23.060 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[12911], 20.00th=[13698], 00:43:23.060 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:43:23.060 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16450], 00:43:23.060 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[46400], 00:43:23.060 | 99.99th=[48497] 00:43:23.060 bw ( KiB/s): min=25600, max=28672, per=33.77%, avg=26655.00, stdev=830.38, samples=20 00:43:23.060 iops : min= 200, max= 224, avg=208.20, stdev= 6.45, samples=20 00:43:23.060 lat (msec) : 10=3.69%, 20=96.21%, 50=0.10% 00:43:23.060 cpu : usr=95.43%, sys=4.11%, ctx=18, majf=0, minf=153 00:43:23.060 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:23.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:23.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:23.060 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:23.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:23.060 filename0: (groupid=0, jobs=1): err= 0: pid=883219: Tue Nov 5 12:57:50 2024 00:43:23.060 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10043msec) 00:43:23.060 slat (nsec): min=4518, max=47417, avg=16939.83, stdev=3478.31 00:43:23.060 clat (usec): min=8356, max=55513, avg=14889.21, stdev=2411.96 00:43:23.060 lat (usec): min=8370, max=55532, avg=14906.15, stdev=2411.77 00:43:23.060 clat percentiles (usec): 00:43:23.060 | 1.00th=[ 9503], 5.00th=[11731], 10.00th=[13435], 20.00th=[14091], 00:43:23.060 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:43:23.060 | 70.00th=[15401], 80.00th=[15926], 
90.00th=[16450], 95.00th=[16909], 00:43:23.060 | 99.00th=[18220], 99.50th=[18744], 99.90th=[54789], 99.95th=[54789], 00:43:23.060 | 99.99th=[55313] 00:43:23.060 bw ( KiB/s): min=24320, max=27648, per=32.70%, avg=25810.00, stdev=855.63, samples=20 00:43:23.060 iops : min= 190, max= 216, avg=201.60, stdev= 6.67, samples=20 00:43:23.060 lat (msec) : 10=2.28%, 20=97.47%, 50=0.05%, 100=0.20% 00:43:23.060 cpu : usr=95.92%, sys=3.58%, ctx=18, majf=0, minf=177 00:43:23.060 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:23.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:23.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:23.060 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:23.060 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:23.060 00:43:23.060 Run status group 0 (all jobs): 00:43:23.060 READ: bw=77.1MiB/s (80.8MB/s), 25.1MiB/s-26.1MiB/s (26.3MB/s-27.4MB/s), io=774MiB (812MB), run=10008-10044msec 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:23.060 00:43:23.060 real 0m11.072s 00:43:23.060 user 0m29.575s 00:43:23.060 sys 0m1.544s 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:23.060 12:57:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:23.060 ************************************ 00:43:23.060 END TEST fio_dif_digest 00:43:23.060 ************************************ 00:43:23.060 12:57:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:23.060 12:57:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:23.060 rmmod nvme_tcp 00:43:23.060 rmmod nvme_fabrics 00:43:23.060 rmmod nvme_keyring 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 876567 ']' 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 876567 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 876567 ']' 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 876567 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@957 -- # 
uname 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 876567 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 876567' 00:43:23.060 killing process with pid 876567 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@971 -- # kill 876567 00:43:23.060 12:57:50 nvmf_dif -- common/autotest_common.sh@976 -- # wait 876567 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:23.060 12:57:50 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:23.060 Waiting for block devices as requested 00:43:23.060 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:43:23.060 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:23.060 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:23.317 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:23.317 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:23.317 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:23.575 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:43:23.575 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:23.575 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:23.575 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:23.834 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:23.834 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:23.834 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:23.834 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:24.094 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:43:24.094 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:24.094 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:24.353 12:57:53 nvmf_dif -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:24.353 12:57:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:24.353 12:57:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:24.353 12:57:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:26.260 12:57:55 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:26.260 00:43:26.260 real 1m6.872s 00:43:26.260 user 6m29.156s 00:43:26.260 sys 0m17.714s 00:43:26.260 12:57:55 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:26.260 12:57:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:26.260 ************************************ 00:43:26.260 END TEST nvmf_dif 00:43:26.260 ************************************ 00:43:26.260 12:57:55 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:26.260 12:57:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:26.261 12:57:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:26.261 12:57:55 -- common/autotest_common.sh@10 -- # set +x 00:43:26.261 ************************************ 00:43:26.261 START TEST nvmf_abort_qd_sizes 00:43:26.261 ************************************ 00:43:26.261 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:26.520 * Looking for test storage... 00:43:26.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:26.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.520 --rc genhtml_branch_coverage=1 00:43:26.520 --rc genhtml_function_coverage=1 00:43:26.520 --rc genhtml_legend=1 00:43:26.520 --rc geninfo_all_blocks=1 00:43:26.520 --rc geninfo_unexecuted_blocks=1 00:43:26.520 00:43:26.520 ' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:26.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.520 --rc genhtml_branch_coverage=1 00:43:26.520 --rc genhtml_function_coverage=1 00:43:26.520 --rc genhtml_legend=1 00:43:26.520 --rc 
geninfo_all_blocks=1 00:43:26.520 --rc geninfo_unexecuted_blocks=1 00:43:26.520 00:43:26.520 ' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:26.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.520 --rc genhtml_branch_coverage=1 00:43:26.520 --rc genhtml_function_coverage=1 00:43:26.520 --rc genhtml_legend=1 00:43:26.520 --rc geninfo_all_blocks=1 00:43:26.520 --rc geninfo_unexecuted_blocks=1 00:43:26.520 00:43:26.520 ' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:26.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.520 --rc genhtml_branch_coverage=1 00:43:26.520 --rc genhtml_function_coverage=1 00:43:26.520 --rc genhtml_legend=1 00:43:26.520 --rc geninfo_all_blocks=1 00:43:26.520 --rc geninfo_unexecuted_blocks=1 00:43:26.520 00:43:26.520 ' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:26.520 12:57:55 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:26.520 12:57:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:26.521 12:57:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:26.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:26.521 12:57:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:29.051 12:57:57 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:43:29.051 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:43:29.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:43:29.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:43:29.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:29.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:29.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:43:29.051 00:43:29.051 --- 10.0.0.2 ping statistics --- 00:43:29.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:29.051 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:29.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:29.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:43:29.051 00:43:29.051 --- 10.0.0.1 ping statistics --- 00:43:29.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:29.051 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:29.051 12:57:57 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:29.985 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:29.985 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:29.985 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:29.985 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:29.985 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:29.985 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:29.985 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:29.985 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:29.985 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:30.918 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:31.175 12:58:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=888136 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 888136 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 888136 ']' 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:31.175 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:31.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:31.176 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:31.176 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:31.176 [2024-11-05 12:58:00.302889] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
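The `nvmfappstart`/`waitforlisten` trace above launches `nvmf_tgt` inside the test namespace and then blocks until the target's RPC socket (`/var/tmp/spdk.sock`) is ready. A minimal sketch of that wait loop is below; the helper name, the path-existence check, and the retry/interval values are illustrative assumptions, not SPDK's actual `waitforlisten` implementation (which also verifies the PID is alive):

```shell
# Hypothetical sketch: poll until a path appears, the way waitforlisten
# waits for nvmf_tgt to create its UNIX-domain RPC socket.
# NOTE: retries/interval are made up; real code should also check the PID.
wait_for_rpc_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    # A real check would use [ -S "$sock" ] for a socket; -e keeps this
    # sketch runnable against any path.
    [ -e "$sock" ] && { echo "listening: $sock"; return 0; }
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}
```

Once the socket exists, every subsequent `rpc_cmd` in the log (transport, subsystem, namespace, listener creation) is a JSON-RPC call over it.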
00:43:31.176 [2024-11-05 12:58:00.302964] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:31.176 [2024-11-05 12:58:00.376665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:31.433 [2024-11-05 12:58:00.425029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:31.433 [2024-11-05 12:58:00.425081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:31.433 [2024-11-05 12:58:00.425110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:31.433 [2024-11-05 12:58:00.425122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:31.433 [2024-11-05 12:58:00.425131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
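The `nvmf_tcp_init` trace earlier (nvmf/common.sh@250-291) builds the two-port test topology that the target above listens on: one port of the NIC (`cvl_0_0`) is moved into a fresh network namespace and addressed as 10.0.0.2, while its sibling (`cvl_0_1`) stays in the root namespace as 10.0.0.1, so initiator-to-target traffic crosses the physical link. The dry-run sketch below reproduces that command sequence; interface, namespace, and address names are taken from the log, and `run` only prints, so the sketch is runnable without root (replace it with `run() { "$@"; }` as root to actually apply the configuration):

```shell
set -euo pipefail

TARGET_IF=cvl_0_0      # moved into the namespace, becomes 10.0.0.2 (target side)
INIT_IF=cvl_0_1        # stays in the root namespace, 10.0.0.1 (initiator side)
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }  # dry run: print each command instead of executing it

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-side interface
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The two `ping` checks at the end correspond to the ping output recorded in the log and confirm the topology before `nvmf_tgt` is started inside the namespace.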
00:43:31.433 [2024-11-05 12:58:00.426542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:31.433 [2024-11-05 12:58:00.426606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:31.433 [2024-11-05 12:58:00.426716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:31.433 [2024-11-05 12:58:00.426724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:31.433 12:58:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:31.433 ************************************ 00:43:31.433 START TEST spdk_target_abort 00:43:31.433 ************************************ 00:43:31.433 12:58:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:43:31.433 12:58:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:31.433 12:58:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:43:31.433 12:58:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:31.433 12:58:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.722 spdk_targetn1 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.722 [2024-11-05 12:58:03.453610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:34.722 [2024-11-05 12:58:03.497973] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:34.722 12:58:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:38.000 Initializing NVMe Controllers 00:43:38.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:38.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:38.000 Initialization complete. Launching workers. 
00:43:38.000 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12825, failed: 0 00:43:38.000 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 11582 00:43:38.000 success 704, unsuccessful 539, failed 0 00:43:38.000 12:58:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:38.000 12:58:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:41.279 Initializing NVMe Controllers 00:43:41.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:41.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:41.279 Initialization complete. Launching workers. 00:43:41.279 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8640, failed: 0 00:43:41.279 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1205, failed to submit 7435 00:43:41.279 success 295, unsuccessful 910, failed 0 00:43:41.279 12:58:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:41.279 12:58:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:44.558 Initializing NVMe Controllers 00:43:44.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:44.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:44.558 Initialization complete. Launching workers. 
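The abort example's per-run counters above are internally consistent: every completed I/O either had an abort submitted for it or the abort failed to submit, and each submitted abort is counted as either success or unsuccessful. A quick sanity check against the qd=4 and qd=24 summaries, with the numbers copied from the log (the `check_run` helper is ours, not part of the test suite):

```shell
# Verify the abort accounting reported in the log:
#   I/O completed   = aborts submitted + failed-to-submit
#   aborts submitted = success + unsuccessful
check_run() {
  local completed=$1 submitted=$2 failed_to_submit=$3 success=$4 unsuccessful=$5
  (( submitted + failed_to_submit == completed )) || { echo "completed mismatch"; return 1; }
  (( success + unsuccessful == submitted ))       || { echo "submitted mismatch"; return 1; }
  echo "ok: $completed I/O, $submitted aborts ($success ok / $unsuccessful not)"
}

check_run 12825 1243 11582 704 539   # qd=4  run
check_run  8640 1205  7435 295 910   # qd=24 run
```

Both identities hold for each queue depth, so the "failed: 0" at the end of each summary line refers to hard failures, not to aborts that raced and lost (those are "unsuccessful").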
00:43:44.558 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30906, failed: 0 00:43:44.558 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2601, failed to submit 28305 00:43:44.558 success 498, unsuccessful 2103, failed 0 00:43:44.558 12:58:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:44.558 12:58:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:44.558 12:58:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:44.558 12:58:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:44.558 12:58:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:44.558 12:58:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:44.558 12:58:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 888136 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 888136 ']' 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 888136 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 888136 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 888136' 00:43:45.491 killing process with pid 888136 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 888136 00:43:45.491 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 888136 00:43:45.749 00:43:45.749 real 0m14.283s 00:43:45.749 user 0m54.039s 00:43:45.749 sys 0m2.625s 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:45.749 ************************************ 00:43:45.749 END TEST spdk_target_abort 00:43:45.749 ************************************ 00:43:45.749 12:58:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:45.749 12:58:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:45.749 12:58:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:45.749 12:58:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:45.749 ************************************ 00:43:45.749 START TEST kernel_target_abort 00:43:45.749 ************************************ 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:43:45.749 12:58:14 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:45.749 12:58:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:47.128 Waiting for block devices as requested 00:43:47.128 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:43:47.128 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:47.385 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:47.385 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:47.385 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:47.644 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:47.644 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:43:47.644 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:47.645 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:47.903 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:47.903 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:47.903 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:47.903 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:48.162 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:48.162 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:43:48.162 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:48.162 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local 
device=nvme0n1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:48.421 No valid GPT data, bailing 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:43:48.421 00:43:48.421 Discovery Log Number of Records 2, Generation counter 2 00:43:48.421 =====Discovery Log Entry 0====== 00:43:48.421 trtype: tcp 00:43:48.421 adrfam: ipv4 00:43:48.421 subtype: current discovery subsystem 00:43:48.421 treq: not specified, sq flow control disable supported 00:43:48.421 portid: 1 00:43:48.421 trsvcid: 4420 00:43:48.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:48.421 traddr: 10.0.0.1 00:43:48.421 eflags: none 00:43:48.421 sectype: none 00:43:48.421 =====Discovery Log Entry 1====== 00:43:48.421 trtype: tcp 00:43:48.421 adrfam: ipv4 00:43:48.421 subtype: nvme subsystem 00:43:48.421 treq: not specified, sq flow control disable supported 00:43:48.421 portid: 1 00:43:48.421 trsvcid: 4420 00:43:48.421 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:48.421 traddr: 10.0.0.1 00:43:48.421 eflags: none 00:43:48.421 sectype: none 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:48.421 12:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:51.700 Initializing NVMe Controllers 00:43:51.700 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:51.700 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:51.700 Initialization complete. Launching workers. 
00:43:51.700 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 49741, failed: 0 00:43:51.700 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 49741, failed to submit 0 00:43:51.700 success 0, unsuccessful 49741, failed 0 00:43:51.700 12:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:51.700 12:58:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:54.978 Initializing NVMe Controllers 00:43:54.978 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:54.978 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:54.978 Initialization complete. Launching workers. 00:43:54.978 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101971, failed: 0 00:43:54.978 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25690, failed to submit 76281 00:43:54.978 success 0, unsuccessful 25690, failed 0 00:43:54.978 12:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:54.978 12:58:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:58.318 Initializing NVMe Controllers 00:43:58.318 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:58.318 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:58.318 Initialization complete. Launching workers. 
00:43:58.318 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97700, failed: 0 00:43:58.318 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24414, failed to submit 73286 00:43:58.318 success 0, unsuccessful 24414, failed 0 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:58.318 12:58:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:59.255 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:59.256 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:59.256 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:59.256 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:59.256 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:59.256 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:59.256 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:59.256 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:59.256 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:44:00.188 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:44:00.447 00:44:00.447 real 0m14.575s 00:44:00.447 user 0m6.645s 00:44:00.447 sys 0m3.445s 00:44:00.447 12:58:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:00.447 12:58:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:00.447 ************************************ 00:44:00.447 END TEST kernel_target_abort 00:44:00.447 ************************************ 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:00.447 rmmod nvme_tcp 00:44:00.447 rmmod nvme_fabrics 00:44:00.447 rmmod nvme_keyring 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 888136 ']' 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 888136 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 888136 ']' 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 888136 00:44:00.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (888136) - No such process 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 888136 is not found' 00:44:00.447 Process with pid 888136 is not found 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:00.447 12:58:29 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:01.824 Waiting for block devices as requested 00:44:01.824 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:44:01.824 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:01.824 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:02.083 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:02.083 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:02.083 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:02.083 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:44:02.341 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:02.341 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:02.341 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:44:02.599 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:44:02.599 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:44:02.599 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:44:02.599 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:44:02.857 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:44:02.857 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:44:02.857 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:02.857 12:58:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:05.387 12:58:34 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:05.387 00:44:05.387 real 0m38.629s 00:44:05.387 user 1m2.950s 00:44:05.387 sys 0m9.676s 00:44:05.387 12:58:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:05.387 12:58:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:05.387 ************************************ 00:44:05.387 END TEST nvmf_abort_qd_sizes 00:44:05.387 ************************************ 00:44:05.387 12:58:34 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:05.387 12:58:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:44:05.387 12:58:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:44:05.387 12:58:34 -- common/autotest_common.sh@10 -- # set +x 00:44:05.387 ************************************ 00:44:05.387 START TEST keyring_file 00:44:05.387 ************************************ 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:05.387 * Looking for test storage... 00:44:05.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:05.387 12:58:34 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.387 --rc genhtml_branch_coverage=1 00:44:05.387 --rc genhtml_function_coverage=1 00:44:05.387 --rc genhtml_legend=1 00:44:05.387 --rc geninfo_all_blocks=1 00:44:05.387 --rc geninfo_unexecuted_blocks=1 00:44:05.387 00:44:05.387 ' 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.387 --rc genhtml_branch_coverage=1 00:44:05.387 --rc genhtml_function_coverage=1 00:44:05.387 --rc genhtml_legend=1 00:44:05.387 --rc geninfo_all_blocks=1 00:44:05.387 --rc 
geninfo_unexecuted_blocks=1 00:44:05.387 00:44:05.387 ' 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.387 --rc genhtml_branch_coverage=1 00:44:05.387 --rc genhtml_function_coverage=1 00:44:05.387 --rc genhtml_legend=1 00:44:05.387 --rc geninfo_all_blocks=1 00:44:05.387 --rc geninfo_unexecuted_blocks=1 00:44:05.387 00:44:05.387 ' 00:44:05.387 12:58:34 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:05.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:05.387 --rc genhtml_branch_coverage=1 00:44:05.387 --rc genhtml_function_coverage=1 00:44:05.387 --rc genhtml_legend=1 00:44:05.387 --rc geninfo_all_blocks=1 00:44:05.387 --rc geninfo_unexecuted_blocks=1 00:44:05.387 00:44:05.387 ' 00:44:05.387 12:58:34 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:05.387 12:58:34 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:05.387 12:58:34 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:05.387 12:58:34 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:05.387 12:58:34 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:05.388 12:58:34 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.388 12:58:34 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.388 12:58:34 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.388 12:58:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:05.388 12:58:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:05.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.p3APCHlU04 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.p3APCHlU04 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.p3APCHlU04 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.p3APCHlU04 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TUpXbrEMi5 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:05.388 12:58:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TUpXbrEMi5 00:44:05.388 12:58:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TUpXbrEMi5 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TUpXbrEMi5 
00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=893897 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:05.388 12:58:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 893897 00:44:05.388 12:58:34 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 893897 ']' 00:44:05.388 12:58:34 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:05.388 12:58:34 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:05.388 12:58:34 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:05.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:05.388 12:58:34 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:05.388 12:58:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:05.388 [2024-11-05 12:58:34.451044] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:44:05.388 [2024-11-05 12:58:34.451124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893897 ] 00:44:05.388 [2024-11-05 12:58:34.516581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:05.388 [2024-11-05 12:58:34.562485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:05.646 12:58:34 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:05.646 12:58:34 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:44:05.646 12:58:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:05.646 12:58:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:05.646 12:58:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:05.646 [2024-11-05 12:58:34.822913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:05.646 null0 00:44:05.646 [2024-11-05 12:58:34.854962] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:05.647 [2024-11-05 12:58:34.855471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:05.647 12:58:34 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:05.647 [2024-11-05 12:58:34.878998] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:05.647 request: 00:44:05.647 { 00:44:05.647 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:05.647 "secure_channel": false, 00:44:05.647 "listen_address": { 00:44:05.647 "trtype": "tcp", 00:44:05.647 "traddr": "127.0.0.1", 00:44:05.647 "trsvcid": "4420" 00:44:05.647 }, 00:44:05.647 "method": "nvmf_subsystem_add_listener", 00:44:05.647 "req_id": 1 00:44:05.647 } 00:44:05.647 Got JSON-RPC error response 00:44:05.647 response: 00:44:05.647 { 00:44:05.647 "code": -32602, 00:44:05.647 "message": "Invalid parameters" 00:44:05.647 } 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:05.647 12:58:34 keyring_file -- keyring/file.sh@47 -- # bperfpid=893908 00:44:05.647 12:58:34 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:05.647 12:58:34 keyring_file -- keyring/file.sh@49 -- # waitforlisten 893908 /var/tmp/bperf.sock 00:44:05.647 12:58:34 
keyring_file -- common/autotest_common.sh@833 -- # '[' -z 893908 ']' 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:05.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:05.647 12:58:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:05.905 [2024-11-05 12:58:34.927702] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:44:05.905 [2024-11-05 12:58:34.927780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893908 ] 00:44:05.905 [2024-11-05 12:58:34.992813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:05.905 [2024-11-05 12:58:35.037760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.162 12:58:35 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:06.162 12:58:35 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:44:06.162 12:58:35 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:06.162 12:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:06.420 12:58:35 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TUpXbrEMi5 00:44:06.420 12:58:35 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TUpXbrEMi5 00:44:06.677 12:58:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:06.677 12:58:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:06.677 12:58:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:06.677 12:58:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:06.677 12:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:06.935 12:58:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.p3APCHlU04 == \/\t\m\p\/\t\m\p\.\p\3\A\P\C\H\l\U\0\4 ]] 00:44:06.935 12:58:35 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:06.935 12:58:35 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:06.935 12:58:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:06.935 12:58:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:06.935 12:58:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:07.193 12:58:36 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.TUpXbrEMi5 == \/\t\m\p\/\t\m\p\.\T\U\p\X\b\r\E\M\i\5 ]] 00:44:07.193 12:58:36 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:07.193 12:58:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:07.193 12:58:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.193 12:58:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.193 12:58:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.193 12:58:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:44:07.450 12:58:36 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:07.450 12:58:36 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:07.450 12:58:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:07.450 12:58:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.450 12:58:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.450 12:58:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:07.450 12:58:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:07.708 12:58:36 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:07.708 12:58:36 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:07.708 12:58:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:07.966 [2024-11-05 12:58:37.039585] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:07.966 nvme0n1 00:44:07.966 12:58:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:07.966 12:58:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:07.966 12:58:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:07.966 12:58:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:07.966 12:58:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:07.966 12:58:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:08.224 12:58:37 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:08.224 12:58:37 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:08.224 12:58:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:08.224 12:58:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:08.224 12:58:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:08.224 12:58:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:08.224 12:58:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:08.481 12:58:37 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:08.482 12:58:37 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:08.739 Running I/O for 1 seconds... 00:44:09.671 10395.00 IOPS, 40.61 MiB/s 00:44:09.671 Latency(us) 00:44:09.671 [2024-11-05T11:58:38.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:09.671 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:09.671 nvme0n1 : 1.01 10446.26 40.81 0.00 0.00 12211.99 3907.89 17670.45 00:44:09.671 [2024-11-05T11:58:38.909Z] =================================================================================================================== 00:44:09.671 [2024-11-05T11:58:38.909Z] Total : 10446.26 40.81 0.00 0.00 12211.99 3907.89 17670.45 00:44:09.671 { 00:44:09.671 "results": [ 00:44:09.671 { 00:44:09.671 "job": "nvme0n1", 00:44:09.671 "core_mask": "0x2", 00:44:09.671 "workload": "randrw", 00:44:09.671 "percentage": 50, 00:44:09.671 "status": "finished", 00:44:09.671 "queue_depth": 128, 00:44:09.671 "io_size": 4096, 00:44:09.671 "runtime": 1.007442, 00:44:09.671 "iops": 10446.258940961365, 00:44:09.671 "mibps": 40.80569898813033, 
00:44:09.671 "io_failed": 0, 00:44:09.671 "io_timeout": 0, 00:44:09.671 "avg_latency_us": 12211.987853090643, 00:44:09.671 "min_latency_us": 3907.8874074074074, 00:44:09.671 "max_latency_us": 17670.447407407406 00:44:09.671 } 00:44:09.671 ], 00:44:09.671 "core_count": 1 00:44:09.671 } 00:44:09.671 12:58:38 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:09.671 12:58:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:09.928 12:58:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:09.928 12:58:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:09.928 12:58:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:09.928 12:58:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:09.928 12:58:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:09.928 12:58:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:10.186 12:58:39 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:10.186 12:58:39 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:10.186 12:58:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:10.186 12:58:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:10.186 12:58:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:10.186 12:58:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:10.186 12:58:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:10.444 12:58:39 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:10.444 12:58:39 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:10.444 12:58:39 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:10.444 12:58:39 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:10.444 12:58:39 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:10.444 12:58:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:10.444 12:58:39 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:10.444 12:58:39 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:10.444 12:58:39 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:10.444 12:58:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:10.701 [2024-11-05 12:58:39.912609] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:10.701 [2024-11-05 12:58:39.913168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d044d0 (107): Transport endpoint is not connected 00:44:10.701 [2024-11-05 12:58:39.914160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d044d0 (9): Bad file descriptor 00:44:10.701 [2024-11-05 12:58:39.915159] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:10.701 [2024-11-05 12:58:39.915177] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:10.701 [2024-11-05 12:58:39.915192] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:10.701 [2024-11-05 12:58:39.915206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:10.701 request: 00:44:10.701 { 00:44:10.701 "name": "nvme0", 00:44:10.701 "trtype": "tcp", 00:44:10.701 "traddr": "127.0.0.1", 00:44:10.701 "adrfam": "ipv4", 00:44:10.701 "trsvcid": "4420", 00:44:10.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:10.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:10.701 "prchk_reftag": false, 00:44:10.701 "prchk_guard": false, 00:44:10.701 "hdgst": false, 00:44:10.701 "ddgst": false, 00:44:10.701 "psk": "key1", 00:44:10.701 "allow_unrecognized_csi": false, 00:44:10.701 "method": "bdev_nvme_attach_controller", 00:44:10.701 "req_id": 1 00:44:10.701 } 00:44:10.701 Got JSON-RPC error response 00:44:10.701 response: 00:44:10.701 { 00:44:10.701 "code": -5, 00:44:10.701 "message": "Input/output error" 00:44:10.701 } 00:44:10.701 12:58:39 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:10.701 12:58:39 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:10.701 12:58:39 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:10.701 12:58:39 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:10.701 12:58:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:10.701 12:58:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:10.701 12:58:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:10.701 12:58:39 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:44:10.701 12:58:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:10.701 12:58:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:11.266 12:58:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:11.266 12:58:40 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:11.266 12:58:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:11.266 12:58:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:11.266 12:58:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:11.266 12:58:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:11.266 12:58:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:11.266 12:58:40 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:11.266 12:58:40 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:11.266 12:58:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:11.832 12:58:40 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:11.832 12:58:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:11.832 12:58:41 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:11.832 12:58:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:11.832 12:58:41 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:12.089 12:58:41 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:44:12.089 12:58:41 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.p3APCHlU04 00:44:12.089 12:58:41 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:12.089 12:58:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:12.089 12:58:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:12.089 12:58:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:12.089 12:58:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:12.089 12:58:41 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:12.089 12:58:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:12.089 12:58:41 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:12.089 12:58:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:12.347 [2024-11-05 12:58:41.552269] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.p3APCHlU04': 0100660 00:44:12.347 [2024-11-05 12:58:41.552306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:12.347 request: 00:44:12.347 { 00:44:12.347 "name": "key0", 00:44:12.347 "path": "/tmp/tmp.p3APCHlU04", 00:44:12.347 "method": "keyring_file_add_key", 00:44:12.347 "req_id": 1 00:44:12.347 } 00:44:12.347 Got JSON-RPC error response 00:44:12.347 response: 00:44:12.347 { 00:44:12.347 "code": -1, 00:44:12.347 "message": "Operation not permitted" 00:44:12.347 } 00:44:12.347 12:58:41 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:12.347 12:58:41 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:12.347 12:58:41 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:12.347 12:58:41 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:12.347 12:58:41 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.p3APCHlU04 00:44:12.347 12:58:41 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:12.347 12:58:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.p3APCHlU04 00:44:12.604 12:58:41 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.p3APCHlU04 00:44:12.604 12:58:41 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:12.862 12:58:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:12.862 12:58:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:12.862 12:58:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:12.862 12:58:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:12.862 12:58:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:13.120 12:58:42 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:13.120 12:58:42 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:13.120 12:58:42 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:44:13.120 12:58:42 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:13.120 12:58:42 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:13.120 12:58:42 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:13.120 12:58:42 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:13.120 12:58:42 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:13.120 12:58:42 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:13.120 12:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:13.377 [2024-11-05 12:58:42.398560] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.p3APCHlU04': No such file or directory 00:44:13.377 [2024-11-05 12:58:42.398593] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:13.377 [2024-11-05 12:58:42.398631] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:13.377 [2024-11-05 12:58:42.398652] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:13.377 [2024-11-05 12:58:42.398666] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:13.377 [2024-11-05 12:58:42.398676] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:13.377 request: 00:44:13.377 { 00:44:13.377 "name": "nvme0", 00:44:13.377 "trtype": "tcp", 00:44:13.377 "traddr": "127.0.0.1", 00:44:13.377 "adrfam": "ipv4", 00:44:13.377 "trsvcid": "4420", 00:44:13.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:13.377 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:44:13.377 "prchk_reftag": false, 00:44:13.377 "prchk_guard": false, 00:44:13.377 "hdgst": false, 00:44:13.377 "ddgst": false, 00:44:13.377 "psk": "key0", 00:44:13.377 "allow_unrecognized_csi": false, 00:44:13.377 "method": "bdev_nvme_attach_controller", 00:44:13.377 "req_id": 1 00:44:13.377 } 00:44:13.377 Got JSON-RPC error response 00:44:13.377 response: 00:44:13.377 { 00:44:13.377 "code": -19, 00:44:13.377 "message": "No such device" 00:44:13.377 } 00:44:13.377 12:58:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:44:13.377 12:58:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:13.377 12:58:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:13.377 12:58:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:13.377 12:58:42 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:13.377 12:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:13.635 12:58:42 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.y2m3TOXFxd 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:13.635 12:58:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:13.635 12:58:42 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:44:13.635 12:58:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:13.635 12:58:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:13.635 12:58:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:13.635 12:58:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.y2m3TOXFxd 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.y2m3TOXFxd 00:44:13.635 12:58:42 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.y2m3TOXFxd 00:44:13.635 12:58:42 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y2m3TOXFxd 00:44:13.635 12:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y2m3TOXFxd 00:44:13.893 12:58:42 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:13.893 12:58:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:14.150 nvme0n1 00:44:14.150 12:58:43 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:14.150 12:58:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:14.150 12:58:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:14.150 12:58:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:14.150 12:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:14.150 
12:58:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:14.407 12:58:43 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:14.407 12:58:43 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:14.407 12:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:14.665 12:58:43 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:14.665 12:58:43 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:14.665 12:58:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:14.665 12:58:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:14.665 12:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:14.923 12:58:44 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:14.923 12:58:44 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:14.923 12:58:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:14.923 12:58:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:14.923 12:58:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:14.923 12:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:14.923 12:58:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:15.488 12:58:44 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:15.488 12:58:44 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:15.488 12:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:44:15.488 12:58:44 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:15.488 12:58:44 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:15.488 12:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:15.746 12:58:44 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:15.746 12:58:44 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y2m3TOXFxd 00:44:15.746 12:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y2m3TOXFxd 00:44:16.004 12:58:45 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TUpXbrEMi5 00:44:16.004 12:58:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TUpXbrEMi5 00:44:16.569 12:58:45 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:16.569 12:58:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:16.827 nvme0n1 00:44:16.827 12:58:45 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:16.827 12:58:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:17.085 12:58:46 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:17.085 "subsystems": [ 00:44:17.085 { 00:44:17.085 "subsystem": "keyring", 00:44:17.085 
"config": [ 00:44:17.085 { 00:44:17.085 "method": "keyring_file_add_key", 00:44:17.085 "params": { 00:44:17.085 "name": "key0", 00:44:17.085 "path": "/tmp/tmp.y2m3TOXFxd" 00:44:17.085 } 00:44:17.085 }, 00:44:17.085 { 00:44:17.085 "method": "keyring_file_add_key", 00:44:17.085 "params": { 00:44:17.085 "name": "key1", 00:44:17.085 "path": "/tmp/tmp.TUpXbrEMi5" 00:44:17.085 } 00:44:17.085 } 00:44:17.085 ] 00:44:17.085 }, 00:44:17.085 { 00:44:17.085 "subsystem": "iobuf", 00:44:17.085 "config": [ 00:44:17.085 { 00:44:17.085 "method": "iobuf_set_options", 00:44:17.085 "params": { 00:44:17.085 "small_pool_count": 8192, 00:44:17.085 "large_pool_count": 1024, 00:44:17.085 "small_bufsize": 8192, 00:44:17.085 "large_bufsize": 135168, 00:44:17.085 "enable_numa": false 00:44:17.085 } 00:44:17.085 } 00:44:17.085 ] 00:44:17.085 }, 00:44:17.085 { 00:44:17.085 "subsystem": "sock", 00:44:17.085 "config": [ 00:44:17.085 { 00:44:17.085 "method": "sock_set_default_impl", 00:44:17.085 "params": { 00:44:17.085 "impl_name": "posix" 00:44:17.085 } 00:44:17.085 }, 00:44:17.085 { 00:44:17.085 "method": "sock_impl_set_options", 00:44:17.085 "params": { 00:44:17.085 "impl_name": "ssl", 00:44:17.085 "recv_buf_size": 4096, 00:44:17.085 "send_buf_size": 4096, 00:44:17.085 "enable_recv_pipe": true, 00:44:17.085 "enable_quickack": false, 00:44:17.085 "enable_placement_id": 0, 00:44:17.085 "enable_zerocopy_send_server": true, 00:44:17.085 "enable_zerocopy_send_client": false, 00:44:17.085 "zerocopy_threshold": 0, 00:44:17.085 "tls_version": 0, 00:44:17.085 "enable_ktls": false 00:44:17.085 } 00:44:17.085 }, 00:44:17.085 { 00:44:17.085 "method": "sock_impl_set_options", 00:44:17.085 "params": { 00:44:17.085 "impl_name": "posix", 00:44:17.085 "recv_buf_size": 2097152, 00:44:17.085 "send_buf_size": 2097152, 00:44:17.085 "enable_recv_pipe": true, 00:44:17.085 "enable_quickack": false, 00:44:17.085 "enable_placement_id": 0, 00:44:17.085 "enable_zerocopy_send_server": true, 00:44:17.085 
"enable_zerocopy_send_client": false, 00:44:17.085 "zerocopy_threshold": 0, 00:44:17.085 "tls_version": 0, 00:44:17.085 "enable_ktls": false 00:44:17.085 } 00:44:17.085 } 00:44:17.085 ] 00:44:17.085 }, 00:44:17.085 { 00:44:17.085 "subsystem": "vmd", 00:44:17.085 "config": [] 00:44:17.085 }, 00:44:17.085 { 00:44:17.085 "subsystem": "accel", 00:44:17.085 "config": [ 00:44:17.085 { 00:44:17.085 "method": "accel_set_options", 00:44:17.085 "params": { 00:44:17.085 "small_cache_size": 128, 00:44:17.085 "large_cache_size": 16, 00:44:17.085 "task_count": 2048, 00:44:17.085 "sequence_count": 2048, 00:44:17.086 "buf_count": 2048 00:44:17.086 } 00:44:17.086 } 00:44:17.086 ] 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 "subsystem": "bdev", 00:44:17.086 "config": [ 00:44:17.086 { 00:44:17.086 "method": "bdev_set_options", 00:44:17.086 "params": { 00:44:17.086 "bdev_io_pool_size": 65535, 00:44:17.086 "bdev_io_cache_size": 256, 00:44:17.086 "bdev_auto_examine": true, 00:44:17.086 "iobuf_small_cache_size": 128, 00:44:17.086 "iobuf_large_cache_size": 16 00:44:17.086 } 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 "method": "bdev_raid_set_options", 00:44:17.086 "params": { 00:44:17.086 "process_window_size_kb": 1024, 00:44:17.086 "process_max_bandwidth_mb_sec": 0 00:44:17.086 } 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 "method": "bdev_iscsi_set_options", 00:44:17.086 "params": { 00:44:17.086 "timeout_sec": 30 00:44:17.086 } 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 "method": "bdev_nvme_set_options", 00:44:17.086 "params": { 00:44:17.086 "action_on_timeout": "none", 00:44:17.086 "timeout_us": 0, 00:44:17.086 "timeout_admin_us": 0, 00:44:17.086 "keep_alive_timeout_ms": 10000, 00:44:17.086 "arbitration_burst": 0, 00:44:17.086 "low_priority_weight": 0, 00:44:17.086 "medium_priority_weight": 0, 00:44:17.086 "high_priority_weight": 0, 00:44:17.086 "nvme_adminq_poll_period_us": 10000, 00:44:17.086 "nvme_ioq_poll_period_us": 0, 00:44:17.086 "io_queue_requests": 512, 00:44:17.086 
"delay_cmd_submit": true, 00:44:17.086 "transport_retry_count": 4, 00:44:17.086 "bdev_retry_count": 3, 00:44:17.086 "transport_ack_timeout": 0, 00:44:17.086 "ctrlr_loss_timeout_sec": 0, 00:44:17.086 "reconnect_delay_sec": 0, 00:44:17.086 "fast_io_fail_timeout_sec": 0, 00:44:17.086 "disable_auto_failback": false, 00:44:17.086 "generate_uuids": false, 00:44:17.086 "transport_tos": 0, 00:44:17.086 "nvme_error_stat": false, 00:44:17.086 "rdma_srq_size": 0, 00:44:17.086 "io_path_stat": false, 00:44:17.086 "allow_accel_sequence": false, 00:44:17.086 "rdma_max_cq_size": 0, 00:44:17.086 "rdma_cm_event_timeout_ms": 0, 00:44:17.086 "dhchap_digests": [ 00:44:17.086 "sha256", 00:44:17.086 "sha384", 00:44:17.086 "sha512" 00:44:17.086 ], 00:44:17.086 "dhchap_dhgroups": [ 00:44:17.086 "null", 00:44:17.086 "ffdhe2048", 00:44:17.086 "ffdhe3072", 00:44:17.086 "ffdhe4096", 00:44:17.086 "ffdhe6144", 00:44:17.086 "ffdhe8192" 00:44:17.086 ] 00:44:17.086 } 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 "method": "bdev_nvme_attach_controller", 00:44:17.086 "params": { 00:44:17.086 "name": "nvme0", 00:44:17.086 "trtype": "TCP", 00:44:17.086 "adrfam": "IPv4", 00:44:17.086 "traddr": "127.0.0.1", 00:44:17.086 "trsvcid": "4420", 00:44:17.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:17.086 "prchk_reftag": false, 00:44:17.086 "prchk_guard": false, 00:44:17.086 "ctrlr_loss_timeout_sec": 0, 00:44:17.086 "reconnect_delay_sec": 0, 00:44:17.086 "fast_io_fail_timeout_sec": 0, 00:44:17.086 "psk": "key0", 00:44:17.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:17.086 "hdgst": false, 00:44:17.086 "ddgst": false, 00:44:17.086 "multipath": "multipath" 00:44:17.086 } 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 "method": "bdev_nvme_set_hotplug", 00:44:17.086 "params": { 00:44:17.086 "period_us": 100000, 00:44:17.086 "enable": false 00:44:17.086 } 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 "method": "bdev_wait_for_examine" 00:44:17.086 } 00:44:17.086 ] 00:44:17.086 }, 00:44:17.086 { 00:44:17.086 
"subsystem": "nbd", 00:44:17.086 "config": [] 00:44:17.086 } 00:44:17.086 ] 00:44:17.086 }' 00:44:17.086 12:58:46 keyring_file -- keyring/file.sh@115 -- # killprocess 893908 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 893908 ']' 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@956 -- # kill -0 893908 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@957 -- # uname 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 893908 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 893908' 00:44:17.086 killing process with pid 893908 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@971 -- # kill 893908 00:44:17.086 Received shutdown signal, test time was about 1.000000 seconds 00:44:17.086 00:44:17.086 Latency(us) 00:44:17.086 [2024-11-05T11:58:46.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:17.086 [2024-11-05T11:58:46.324Z] =================================================================================================================== 00:44:17.086 [2024-11-05T11:58:46.324Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:17.086 12:58:46 keyring_file -- common/autotest_common.sh@976 -- # wait 893908 00:44:17.344 12:58:46 keyring_file -- keyring/file.sh@118 -- # bperfpid=895368 00:44:17.344 12:58:46 keyring_file -- keyring/file.sh@120 -- # waitforlisten 895368 /var/tmp/bperf.sock 00:44:17.344 12:58:46 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 895368 ']' 00:44:17.344 12:58:46 keyring_file -- keyring/file.sh@116 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:17.344 12:58:46 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:17.344 12:58:46 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:17.344 12:58:46 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:17.344 "subsystems": [ 00:44:17.344 { 00:44:17.344 "subsystem": "keyring", 00:44:17.344 "config": [ 00:44:17.344 { 00:44:17.344 "method": "keyring_file_add_key", 00:44:17.344 "params": { 00:44:17.344 "name": "key0", 00:44:17.344 "path": "/tmp/tmp.y2m3TOXFxd" 00:44:17.344 } 00:44:17.344 }, 00:44:17.344 { 00:44:17.344 "method": "keyring_file_add_key", 00:44:17.344 "params": { 00:44:17.344 "name": "key1", 00:44:17.344 "path": "/tmp/tmp.TUpXbrEMi5" 00:44:17.344 } 00:44:17.344 } 00:44:17.344 ] 00:44:17.344 }, 00:44:17.344 { 00:44:17.344 "subsystem": "iobuf", 00:44:17.344 "config": [ 00:44:17.344 { 00:44:17.344 "method": "iobuf_set_options", 00:44:17.344 "params": { 00:44:17.344 "small_pool_count": 8192, 00:44:17.344 "large_pool_count": 1024, 00:44:17.344 "small_bufsize": 8192, 00:44:17.344 "large_bufsize": 135168, 00:44:17.344 "enable_numa": false 00:44:17.344 } 00:44:17.344 } 00:44:17.344 ] 00:44:17.344 }, 00:44:17.344 { 00:44:17.344 "subsystem": "sock", 00:44:17.344 "config": [ 00:44:17.344 { 00:44:17.344 "method": "sock_set_default_impl", 00:44:17.344 "params": { 00:44:17.344 "impl_name": "posix" 00:44:17.344 } 00:44:17.344 }, 00:44:17.344 { 00:44:17.344 "method": "sock_impl_set_options", 00:44:17.344 "params": { 00:44:17.344 "impl_name": "ssl", 00:44:17.344 "recv_buf_size": 4096, 00:44:17.344 "send_buf_size": 4096, 00:44:17.344 "enable_recv_pipe": true, 00:44:17.344 "enable_quickack": false, 00:44:17.344 "enable_placement_id": 0, 00:44:17.344 "enable_zerocopy_send_server": true, 00:44:17.344 "enable_zerocopy_send_client": false, 00:44:17.345 
"zerocopy_threshold": 0, 00:44:17.345 "tls_version": 0, 00:44:17.345 "enable_ktls": false 00:44:17.345 } 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "method": "sock_impl_set_options", 00:44:17.345 "params": { 00:44:17.345 "impl_name": "posix", 00:44:17.345 "recv_buf_size": 2097152, 00:44:17.345 "send_buf_size": 2097152, 00:44:17.345 "enable_recv_pipe": true, 00:44:17.345 "enable_quickack": false, 00:44:17.345 "enable_placement_id": 0, 00:44:17.345 "enable_zerocopy_send_server": true, 00:44:17.345 "enable_zerocopy_send_client": false, 00:44:17.345 "zerocopy_threshold": 0, 00:44:17.345 "tls_version": 0, 00:44:17.345 "enable_ktls": false 00:44:17.345 } 00:44:17.345 } 00:44:17.345 ] 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "subsystem": "vmd", 00:44:17.345 "config": [] 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "subsystem": "accel", 00:44:17.345 "config": [ 00:44:17.345 { 00:44:17.345 "method": "accel_set_options", 00:44:17.345 "params": { 00:44:17.345 "small_cache_size": 128, 00:44:17.345 "large_cache_size": 16, 00:44:17.345 "task_count": 2048, 00:44:17.345 "sequence_count": 2048, 00:44:17.345 "buf_count": 2048 00:44:17.345 } 00:44:17.345 } 00:44:17.345 ] 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "subsystem": "bdev", 00:44:17.345 "config": [ 00:44:17.345 { 00:44:17.345 "method": "bdev_set_options", 00:44:17.345 "params": { 00:44:17.345 "bdev_io_pool_size": 65535, 00:44:17.345 "bdev_io_cache_size": 256, 00:44:17.345 "bdev_auto_examine": true, 00:44:17.345 "iobuf_small_cache_size": 128, 00:44:17.345 "iobuf_large_cache_size": 16 00:44:17.345 } 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "method": "bdev_raid_set_options", 00:44:17.345 "params": { 00:44:17.345 "process_window_size_kb": 1024, 00:44:17.345 "process_max_bandwidth_mb_sec": 0 00:44:17.345 } 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "method": "bdev_iscsi_set_options", 00:44:17.345 "params": { 00:44:17.345 "timeout_sec": 30 00:44:17.345 } 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "method": 
"bdev_nvme_set_options", 00:44:17.345 "params": { 00:44:17.345 "action_on_timeout": "none", 00:44:17.345 "timeout_us": 0, 00:44:17.345 "timeout_admin_us": 0, 00:44:17.345 "keep_alive_timeout_ms": 10000, 00:44:17.345 "arbitration_burst": 0, 00:44:17.345 "low_priority_weight": 0, 00:44:17.345 "medium_priority_weight": 0, 00:44:17.345 "high_priority_weight": 0, 00:44:17.345 "nvme_adminq_poll_period_us": 10000, 00:44:17.345 "nvme_ioq_poll_period_us": 0, 00:44:17.345 "io_queue_requests": 512, 00:44:17.345 "delay_cmd_submit": true, 00:44:17.345 "transport_retry_count": 4, 00:44:17.345 "bdev_retry_count": 3, 00:44:17.345 "transport_ack_timeout": 0, 00:44:17.345 "ctrlr_loss_timeout_sec": 0, 00:44:17.345 "reconnect_delay_sec": 0, 00:44:17.345 "fast_io_fail_timeout_sec": 0, 00:44:17.345 "disable_auto_failback": false, 00:44:17.345 "generate_uuids": false, 00:44:17.345 "transport_tos": 0, 00:44:17.345 "nvme_error_stat": false, 00:44:17.345 "rdma_srq_size": 0, 00:44:17.345 12:58:46 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:44:17.345 "io_path_stat": false, 00:44:17.345 "allow_accel_sequence": false, 00:44:17.345 "rdma_max_cq_size": 0, 00:44:17.345 "rdma_cm_event_timeout_ms": 0, 00:44:17.345 "dhchap_digests": [ 00:44:17.345 "sha256", 00:44:17.345 "sha384", 00:44:17.345 "sha512" 00:44:17.345 ], 00:44:17.345 "dhchap_dhgroups": [ 00:44:17.345 "null", 00:44:17.345 "ffdhe2048", 00:44:17.345 "ffdhe3072", 00:44:17.345 "ffdhe4096", 00:44:17.345 "ffdhe6144", 00:44:17.345 "ffdhe8192" 00:44:17.345 ] 00:44:17.345 } 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "method": "bdev_nvme_attach_controller", 00:44:17.345 "params": { 00:44:17.345 "name": "nvme0", 00:44:17.345 "trtype": "TCP", 00:44:17.345 "adrfam": "IPv4", 00:44:17.345 "traddr": "127.0.0.1", 00:44:17.345 "trsvcid": "4420", 00:44:17.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:17.345 "prchk_reftag": false, 00:44:17.345 "prchk_guard": false, 00:44:17.345 "ctrlr_loss_timeout_sec": 0, 00:44:17.345 "reconnect_delay_sec": 0, 00:44:17.345 "fast_io_fail_timeout_sec": 0, 00:44:17.345 "psk": "key0", 00:44:17.345 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:17.345 "hdgst": false, 00:44:17.345 "ddgst": false, 00:44:17.345 "multipath": "multipath" 00:44:17.345 } 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "method": "bdev_nvme_set_hotplug", 00:44:17.345 "params": { 00:44:17.345 "period_us": 100000, 00:44:17.345 "enable": false 00:44:17.345 } 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "method": "bdev_wait_for_examine" 00:44:17.345 } 00:44:17.345 ] 00:44:17.345 }, 00:44:17.345 { 00:44:17.345 "subsystem": "nbd", 00:44:17.345 "config": [] 00:44:17.345 } 00:44:17.345 ] 00:44:17.345 }' 00:44:17.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:44:17.345 12:58:46 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:17.345 12:58:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:17.345 [2024-11-05 12:58:46.446090] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 00:44:17.345 [2024-11-05 12:58:46.446178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895368 ] 00:44:17.345 [2024-11-05 12:58:46.516644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:17.345 [2024-11-05 12:58:46.565016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:17.603 [2024-11-05 12:58:46.745574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:17.861 12:58:46 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:17.861 12:58:46 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:44:17.861 12:58:46 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:17.861 12:58:46 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:17.861 12:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:18.119 12:58:47 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:18.119 12:58:47 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:18.119 12:58:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:18.119 12:58:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:18.119 12:58:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:18.119 12:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:18.119 12:58:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:18.376 12:58:47 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:18.376 12:58:47 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:18.376 12:58:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:18.377 12:58:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:18.377 12:58:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:18.377 12:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:18.377 12:58:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:18.634 12:58:47 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:18.634 12:58:47 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:18.634 12:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:18.634 12:58:47 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:18.893 12:58:47 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:18.893 12:58:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:18.893 12:58:47 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.y2m3TOXFxd /tmp/tmp.TUpXbrEMi5 00:44:18.893 12:58:47 keyring_file -- keyring/file.sh@20 -- # killprocess 895368 00:44:18.893 12:58:47 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 895368 ']' 00:44:18.893 12:58:47 keyring_file -- common/autotest_common.sh@956 -- # kill -0 895368 00:44:18.893 12:58:47 keyring_file -- common/autotest_common.sh@957 -- # uname 00:44:18.893 12:58:47 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:18.893 12:58:47 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers 
-o comm= 895368 00:44:18.893 12:58:48 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:44:18.893 12:58:48 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:44:18.893 12:58:48 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 895368' 00:44:18.893 killing process with pid 895368 00:44:18.893 12:58:48 keyring_file -- common/autotest_common.sh@971 -- # kill 895368 00:44:18.893 Received shutdown signal, test time was about 1.000000 seconds 00:44:18.893 00:44:18.893 Latency(us) 00:44:18.893 [2024-11-05T11:58:48.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:18.893 [2024-11-05T11:58:48.131Z] =================================================================================================================== 00:44:18.893 [2024-11-05T11:58:48.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:18.893 12:58:48 keyring_file -- common/autotest_common.sh@976 -- # wait 895368 00:44:19.150 12:58:48 keyring_file -- keyring/file.sh@21 -- # killprocess 893897 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 893897 ']' 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@956 -- # kill -0 893897 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@957 -- # uname 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 893897 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 893897' 00:44:19.150 killing process with pid 893897 00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@971 -- # kill 893897 
00:44:19.150 12:58:48 keyring_file -- common/autotest_common.sh@976 -- # wait 893897 00:44:19.410 00:44:19.410 real 0m14.409s 00:44:19.410 user 0m36.807s 00:44:19.410 sys 0m3.274s 00:44:19.410 12:58:48 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:19.410 12:58:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:19.410 ************************************ 00:44:19.410 END TEST keyring_file 00:44:19.410 ************************************ 00:44:19.410 12:58:48 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:44:19.410 12:58:48 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:19.410 12:58:48 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:44:19.410 12:58:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:19.410 12:58:48 -- common/autotest_common.sh@10 -- # set +x 00:44:19.410 ************************************ 00:44:19.410 START TEST keyring_linux 00:44:19.410 ************************************ 00:44:19.410 12:58:48 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:19.410 Joined session keyring: 458839344 00:44:19.669 * Looking for test storage... 
00:44:19.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:19.669 12:58:48 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:19.669 12:58:48 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:44:19.669 12:58:48 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:19.669 12:58:48 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:19.669 12:58:48 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:19.669 12:58:48 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:19.669 12:58:48 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:19.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:19.670 --rc genhtml_branch_coverage=1 00:44:19.670 --rc genhtml_function_coverage=1 00:44:19.670 --rc genhtml_legend=1 00:44:19.670 --rc geninfo_all_blocks=1 00:44:19.670 --rc geninfo_unexecuted_blocks=1 00:44:19.670 00:44:19.670 ' 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:19.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:19.670 --rc genhtml_branch_coverage=1 00:44:19.670 --rc genhtml_function_coverage=1 00:44:19.670 --rc genhtml_legend=1 00:44:19.670 --rc geninfo_all_blocks=1 00:44:19.670 --rc geninfo_unexecuted_blocks=1 00:44:19.670 00:44:19.670 ' 
00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:19.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:19.670 --rc genhtml_branch_coverage=1 00:44:19.670 --rc genhtml_function_coverage=1 00:44:19.670 --rc genhtml_legend=1 00:44:19.670 --rc geninfo_all_blocks=1 00:44:19.670 --rc geninfo_unexecuted_blocks=1 00:44:19.670 00:44:19.670 ' 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:19.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:19.670 --rc genhtml_branch_coverage=1 00:44:19.670 --rc genhtml_function_coverage=1 00:44:19.670 --rc genhtml_legend=1 00:44:19.670 --rc geninfo_all_blocks=1 00:44:19.670 --rc geninfo_unexecuted_blocks=1 00:44:19.670 00:44:19.670 ' 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:19.670 12:58:48 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:19.670 12:58:48 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:19.670 12:58:48 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:19.670 12:58:48 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:19.670 12:58:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:19.670 12:58:48 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:19.670 12:58:48 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:19.670 12:58:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:19.670 12:58:48 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:19.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:19.670 /tmp/:spdk-test:key0 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:19.670 12:58:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:19.670 12:58:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:19.670 /tmp/:spdk-test:key1 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=895855 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:19.670 12:58:48 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 895855 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 895855 ']' 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:19.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:19.670 12:58:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:19.929 [2024-11-05 12:58:48.943953] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:44:19.929 [2024-11-05 12:58:48.944031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895855 ] 00:44:19.929 [2024-11-05 12:58:49.008812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:19.929 [2024-11-05 12:58:49.058145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:44:20.187 12:58:49 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:20.187 [2024-11-05 12:58:49.318748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:20.187 null0 00:44:20.187 [2024-11-05 12:58:49.350807] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:20.187 [2024-11-05 12:58:49.351356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:20.187 12:58:49 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:20.187 267512138 00:44:20.187 12:58:49 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:20.187 510405351 00:44:20.187 12:58:49 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=895865 00:44:20.187 12:58:49 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:20.187 12:58:49 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 895865 /var/tmp/bperf.sock 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 895865 ']' 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:20.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:20.187 12:58:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:20.187 [2024-11-05 12:58:49.417187] Starting SPDK v25.01-pre git sha1 f220d590c / DPDK 23.11.0 initialization... 
00:44:20.187 [2024-11-05 12:58:49.417267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid895865 ] 00:44:20.445 [2024-11-05 12:58:49.483308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:20.445 [2024-11-05 12:58:49.527668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:20.445 12:58:49 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:20.445 12:58:49 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:44:20.445 12:58:49 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:20.445 12:58:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:20.702 12:58:49 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:20.702 12:58:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:21.267 12:58:50 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:21.267 12:58:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:21.525 [2024-11-05 12:58:50.550053] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:21.525 nvme0n1 00:44:21.525 12:58:50 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:21.525 12:58:50 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:21.525 12:58:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:21.525 12:58:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:21.525 12:58:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:21.525 12:58:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:21.782 12:58:50 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:21.782 12:58:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:21.782 12:58:50 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:21.782 12:58:50 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:21.782 12:58:50 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:21.782 12:58:50 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:21.782 12:58:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:22.072 12:58:51 keyring_linux -- keyring/linux.sh@25 -- # sn=267512138 00:44:22.072 12:58:51 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:22.072 12:58:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:22.072 12:58:51 keyring_linux -- keyring/linux.sh@26 -- # [[ 267512138 == \2\6\7\5\1\2\1\3\8 ]] 00:44:22.072 12:58:51 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 267512138 00:44:22.073 12:58:51 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:22.073 12:58:51 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:22.354 Running I/O for 1 seconds... 00:44:23.288 11161.00 IOPS, 43.60 MiB/s 00:44:23.288 Latency(us) 00:44:23.288 [2024-11-05T11:58:52.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:23.288 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:23.288 nvme0n1 : 1.01 11168.72 43.63 0.00 0.00 11392.12 3689.43 15243.19 00:44:23.288 [2024-11-05T11:58:52.526Z] =================================================================================================================== 00:44:23.288 [2024-11-05T11:58:52.526Z] Total : 11168.72 43.63 0.00 0.00 11392.12 3689.43 15243.19 00:44:23.288 { 00:44:23.288 "results": [ 00:44:23.288 { 00:44:23.288 "job": "nvme0n1", 00:44:23.288 "core_mask": "0x2", 00:44:23.288 "workload": "randread", 00:44:23.288 "status": "finished", 00:44:23.288 "queue_depth": 128, 00:44:23.288 "io_size": 4096, 00:44:23.288 "runtime": 1.010859, 00:44:23.288 "iops": 11168.71888166401, 00:44:23.288 "mibps": 43.62780813150004, 00:44:23.288 "io_failed": 0, 00:44:23.288 "io_timeout": 0, 00:44:23.288 "avg_latency_us": 11392.118816651906, 00:44:23.288 "min_latency_us": 3689.434074074074, 00:44:23.288 "max_latency_us": 15243.188148148149 00:44:23.288 } 00:44:23.288 ], 00:44:23.288 "core_count": 1 00:44:23.288 } 00:44:23.288 12:58:52 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:23.288 12:58:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:23.546 12:58:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:23.546 12:58:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:23.546 12:58:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:23.546 12:58:52 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:23.546 12:58:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:23.546 12:58:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:23.804 12:58:52 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:23.804 12:58:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:23.804 12:58:52 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:23.804 12:58:52 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:23.804 12:58:52 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:44:23.804 12:58:52 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:23.804 12:58:52 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:44:23.804 12:58:52 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:23.804 12:58:52 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:44:23.804 12:58:52 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:23.804 12:58:52 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:23.804 12:58:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:24.062 [2024-11-05 12:58:53.124792] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:24.063 [2024-11-05 12:58:53.125540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f390 (107): Transport endpoint is not connected 00:44:24.063 [2024-11-05 12:58:53.126533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68f390 (9): Bad file descriptor 00:44:24.063 [2024-11-05 12:58:53.127532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:24.063 [2024-11-05 12:58:53.127550] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:24.063 [2024-11-05 12:58:53.127577] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:24.063 [2024-11-05 12:58:53.127592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:24.063 request: 00:44:24.063 { 00:44:24.063 "name": "nvme0", 00:44:24.063 "trtype": "tcp", 00:44:24.063 "traddr": "127.0.0.1", 00:44:24.063 "adrfam": "ipv4", 00:44:24.063 "trsvcid": "4420", 00:44:24.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:24.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:24.063 "prchk_reftag": false, 00:44:24.063 "prchk_guard": false, 00:44:24.063 "hdgst": false, 00:44:24.063 "ddgst": false, 00:44:24.063 "psk": ":spdk-test:key1", 00:44:24.063 "allow_unrecognized_csi": false, 00:44:24.063 "method": "bdev_nvme_attach_controller", 00:44:24.063 "req_id": 1 00:44:24.063 } 00:44:24.063 Got JSON-RPC error response 00:44:24.063 response: 00:44:24.063 { 00:44:24.063 "code": -5, 00:44:24.063 "message": "Input/output error" 00:44:24.063 } 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@33 -- # sn=267512138 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 267512138 00:44:24.063 1 links removed 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:24.063 
12:58:53 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@33 -- # sn=510405351 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 510405351 00:44:24.063 1 links removed 00:44:24.063 12:58:53 keyring_linux -- keyring/linux.sh@41 -- # killprocess 895865 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 895865 ']' 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 895865 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 895865 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 895865' 00:44:24.063 killing process with pid 895865 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@971 -- # kill 895865 00:44:24.063 Received shutdown signal, test time was about 1.000000 seconds 00:44:24.063 00:44:24.063 Latency(us) 00:44:24.063 [2024-11-05T11:58:53.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:24.063 [2024-11-05T11:58:53.301Z] =================================================================================================================== 00:44:24.063 [2024-11-05T11:58:53.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:24.063 12:58:53 keyring_linux -- common/autotest_common.sh@976 -- # wait 895865 
00:44:24.320 12:58:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 895855 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 895855 ']' 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 895855 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 895855 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 895855' 00:44:24.320 killing process with pid 895855 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@971 -- # kill 895855 00:44:24.320 12:58:53 keyring_linux -- common/autotest_common.sh@976 -- # wait 895855 00:44:24.580 00:44:24.580 real 0m5.178s 00:44:24.580 user 0m10.295s 00:44:24.580 sys 0m1.662s 00:44:24.580 12:58:53 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:24.580 12:58:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:24.580 ************************************ 00:44:24.580 END TEST keyring_linux 00:44:24.580 ************************************ 00:44:24.839 12:58:53 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@342 -- # '[' 0 
-eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:24.839 12:58:53 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:44:24.839 12:58:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:24.839 12:58:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:24.839 12:58:53 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:44:24.839 12:58:53 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:44:24.839 12:58:53 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:44:24.839 12:58:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:24.839 12:58:53 -- common/autotest_common.sh@10 -- # set +x 00:44:24.839 12:58:53 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:44:24.839 12:58:53 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:44:24.839 12:58:53 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:44:24.839 12:58:53 -- common/autotest_common.sh@10 -- # set +x 00:44:26.739 INFO: APP EXITING 00:44:26.739 INFO: killing all VMs 00:44:26.739 INFO: killing vhost app 00:44:26.739 INFO: EXIT DONE 00:44:27.674 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:44:27.674 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:44:27.674 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:44:27.674 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:44:27.674 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:44:27.674 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:44:27.674 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:44:27.674 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:44:27.674 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:44:27.932 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:44:27.932 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:44:27.932 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:44:27.932 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:44:27.932 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:44:27.932 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:44:27.932 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:44:27.932 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:44:29.309 Cleaning 00:44:29.309 Removing: /var/run/dpdk/spdk0/config 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:29.309 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:29.309 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:29.309 Removing: /var/run/dpdk/spdk1/config 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:29.309 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:29.309 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:29.309 Removing: /var/run/dpdk/spdk2/config 00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:29.309 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:44:29.309 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:44:29.309 Removing: /var/run/dpdk/spdk2/hugepage_info
00:44:29.309 Removing: /var/run/dpdk/spdk3/config
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:44:29.309 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:44:29.309 Removing: /var/run/dpdk/spdk3/hugepage_info
00:44:29.309 Removing: /var/run/dpdk/spdk4/config
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:44:29.309 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:44:29.310 Removing: /var/run/dpdk/spdk4/hugepage_info
00:44:29.310 Removing: /dev/shm/bdev_svc_trace.1
00:44:29.310 Removing: /dev/shm/nvmf_trace.0
00:44:29.310 Removing: /dev/shm/spdk_tgt_trace.pid513571
00:44:29.310 Removing: /var/run/dpdk/spdk0
00:44:29.310 Removing: /var/run/dpdk/spdk1
00:44:29.310 Removing: /var/run/dpdk/spdk2
00:44:29.310 Removing: /var/run/dpdk/spdk3
00:44:29.310 Removing: /var/run/dpdk/spdk4
00:44:29.310 Removing: /var/run/dpdk/spdk_pid511940
00:44:29.310 Removing: /var/run/dpdk/spdk_pid512677
00:44:29.310 Removing: /var/run/dpdk/spdk_pid513571
00:44:29.310 Removing: /var/run/dpdk/spdk_pid513948
00:44:29.310 Removing: /var/run/dpdk/spdk_pid514636
00:44:29.310 Removing: /var/run/dpdk/spdk_pid514776
00:44:29.310 Removing: /var/run/dpdk/spdk_pid515501
00:44:29.310 Removing: /var/run/dpdk/spdk_pid515507
00:44:29.310 Removing: /var/run/dpdk/spdk_pid515771
00:44:29.310 Removing: /var/run/dpdk/spdk_pid517091
00:44:29.310 Removing: /var/run/dpdk/spdk_pid518007
00:44:29.310 Removing: /var/run/dpdk/spdk_pid518212
00:44:29.310 Removing: /var/run/dpdk/spdk_pid518519
00:44:29.310 Removing: /var/run/dpdk/spdk_pid518735
00:44:29.310 Removing: /var/run/dpdk/spdk_pid518934
00:44:29.310 Removing: /var/run/dpdk/spdk_pid519090
00:44:29.310 Removing: /var/run/dpdk/spdk_pid519242
00:44:29.310 Removing: /var/run/dpdk/spdk_pid519436
00:44:29.310 Removing: /var/run/dpdk/spdk_pid519744
00:44:29.310 Removing: /var/run/dpdk/spdk_pid522236
00:44:29.310 Removing: /var/run/dpdk/spdk_pid522369
00:44:29.310 Removing: /var/run/dpdk/spdk_pid522504
00:44:29.310 Removing: /var/run/dpdk/spdk_pid522575
00:44:29.310 Removing: /var/run/dpdk/spdk_pid522871
00:44:29.310 Removing: /var/run/dpdk/spdk_pid522876
00:44:29.310 Removing: /var/run/dpdk/spdk_pid523310
00:44:29.310 Removing: /var/run/dpdk/spdk_pid523315
00:44:29.310 Removing: /var/run/dpdk/spdk_pid523483
00:44:29.310 Removing: /var/run/dpdk/spdk_pid523613
00:44:29.310 Removing: /var/run/dpdk/spdk_pid523777
00:44:29.310 Removing: /var/run/dpdk/spdk_pid523788
00:44:29.310 Removing: /var/run/dpdk/spdk_pid524290
00:44:29.310 Removing: /var/run/dpdk/spdk_pid524442
00:44:29.310 Removing: /var/run/dpdk/spdk_pid524643
00:44:29.310 Removing: /var/run/dpdk/spdk_pid526856
00:44:29.310 Removing: /var/run/dpdk/spdk_pid529393
00:44:29.310 Removing: /var/run/dpdk/spdk_pid537010
00:44:29.310 Removing: /var/run/dpdk/spdk_pid537535
00:44:29.310 Removing: /var/run/dpdk/spdk_pid539941
00:44:29.310 Removing: /var/run/dpdk/spdk_pid540212
00:44:29.310 Removing: /var/run/dpdk/spdk_pid542732
00:44:29.310 Removing: /var/run/dpdk/spdk_pid546602
00:44:29.310 Removing: /var/run/dpdk/spdk_pid548680
00:44:29.310 Removing: /var/run/dpdk/spdk_pid555084
00:44:29.310 Removing: /var/run/dpdk/spdk_pid560329
00:44:29.310 Removing: /var/run/dpdk/spdk_pid561645
00:44:29.310 Removing: /var/run/dpdk/spdk_pid562313
00:44:29.310 Removing: /var/run/dpdk/spdk_pid573314
00:44:29.310 Removing: /var/run/dpdk/spdk_pid575481
00:44:29.310 Removing: /var/run/dpdk/spdk_pid630367
00:44:29.310 Removing: /var/run/dpdk/spdk_pid634048
00:44:29.310 Removing: /var/run/dpdk/spdk_pid637874
00:44:29.310 Removing: /var/run/dpdk/spdk_pid642132
00:44:29.310 Removing: /var/run/dpdk/spdk_pid642136
00:44:29.310 Removing: /var/run/dpdk/spdk_pid642793
00:44:29.310 Removing: /var/run/dpdk/spdk_pid643329
00:44:29.310 Removing: /var/run/dpdk/spdk_pid643983
00:44:29.310 Removing: /var/run/dpdk/spdk_pid644508
00:44:29.310 Removing: /var/run/dpdk/spdk_pid644514
00:44:29.310 Removing: /var/run/dpdk/spdk_pid644657
00:44:29.568 Removing: /var/run/dpdk/spdk_pid644794
00:44:29.568 Removing: /var/run/dpdk/spdk_pid644796
00:44:29.568 Removing: /var/run/dpdk/spdk_pid645450
00:44:29.568 Removing: /var/run/dpdk/spdk_pid646100
00:44:29.568 Removing: /var/run/dpdk/spdk_pid646647
00:44:29.568 Removing: /var/run/dpdk/spdk_pid647047
00:44:29.568 Removing: /var/run/dpdk/spdk_pid647161
00:44:29.568 Removing: /var/run/dpdk/spdk_pid647306
00:44:29.568 Removing: /var/run/dpdk/spdk_pid648228
00:44:29.568 Removing: /var/run/dpdk/spdk_pid649057
00:44:29.568 Removing: /var/run/dpdk/spdk_pid654269
00:44:29.568 Removing: /var/run/dpdk/spdk_pid681918
00:44:29.568 Removing: /var/run/dpdk/spdk_pid685329
00:44:29.568 Removing: /var/run/dpdk/spdk_pid686505
00:44:29.568 Removing: /var/run/dpdk/spdk_pid687819
00:44:29.568 Removing: /var/run/dpdk/spdk_pid687960
00:44:29.568 Removing: /var/run/dpdk/spdk_pid688101
00:44:29.568 Removing: /var/run/dpdk/spdk_pid688241
00:44:29.568 Removing: /var/run/dpdk/spdk_pid688683
00:44:29.568 Removing: /var/run/dpdk/spdk_pid690017
00:44:29.568 Removing: /var/run/dpdk/spdk_pid690777
00:44:29.568 Removing: /var/run/dpdk/spdk_pid691181
00:44:29.568 Removing: /var/run/dpdk/spdk_pid692794
00:44:29.568 Removing: /var/run/dpdk/spdk_pid693215
00:44:29.568 Removing: /var/run/dpdk/spdk_pid693653
00:44:29.568 Removing: /var/run/dpdk/spdk_pid696045
00:44:29.568 Removing: /var/run/dpdk/spdk_pid699400
00:44:29.568 Removing: /var/run/dpdk/spdk_pid699402
00:44:29.568 Removing: /var/run/dpdk/spdk_pid699404
00:44:29.568 Removing: /var/run/dpdk/spdk_pid701544
00:44:29.568 Removing: /var/run/dpdk/spdk_pid703751
00:44:29.568 Removing: /var/run/dpdk/spdk_pid707271
00:44:29.568 Removing: /var/run/dpdk/spdk_pid730469
00:44:29.568 Removing: /var/run/dpdk/spdk_pid733129
00:44:29.568 Removing: /var/run/dpdk/spdk_pid736892
00:44:29.568 Removing: /var/run/dpdk/spdk_pid737836
00:44:29.568 Removing: /var/run/dpdk/spdk_pid738811
00:44:29.568 Removing: /var/run/dpdk/spdk_pid739918
00:44:29.568 Removing: /var/run/dpdk/spdk_pid743207
00:44:29.568 Removing: /var/run/dpdk/spdk_pid745737
00:44:29.568 Removing: /var/run/dpdk/spdk_pid748103
00:44:29.568 Removing: /var/run/dpdk/spdk_pid752331
00:44:29.568 Removing: /var/run/dpdk/spdk_pid752340
00:44:29.568 Removing: /var/run/dpdk/spdk_pid755231
00:44:29.568 Removing: /var/run/dpdk/spdk_pid755373
00:44:29.568 Removing: /var/run/dpdk/spdk_pid755503
00:44:29.568 Removing: /var/run/dpdk/spdk_pid755769
00:44:29.568 Removing: /var/run/dpdk/spdk_pid755888
00:44:29.568 Removing: /var/run/dpdk/spdk_pid756976
00:44:29.568 Removing: /var/run/dpdk/spdk_pid758159
00:44:29.569 Removing: /var/run/dpdk/spdk_pid759333
00:44:29.569 Removing: /var/run/dpdk/spdk_pid760513
00:44:29.569 Removing: /var/run/dpdk/spdk_pid761688
00:44:29.569 Removing: /var/run/dpdk/spdk_pid762863
00:44:29.569 Removing: /var/run/dpdk/spdk_pid766681
00:44:29.569 Removing: /var/run/dpdk/spdk_pid767130
00:44:29.569 Removing: /var/run/dpdk/spdk_pid768417
00:44:29.569 Removing: /var/run/dpdk/spdk_pid769155
00:44:29.569 Removing: /var/run/dpdk/spdk_pid772875
00:44:29.569 Removing: /var/run/dpdk/spdk_pid775461
00:44:29.569 Removing: /var/run/dpdk/spdk_pid778881
00:44:29.569 Removing: /var/run/dpdk/spdk_pid782339
00:44:29.569 Removing: /var/run/dpdk/spdk_pid788742
00:44:29.569 Removing: /var/run/dpdk/spdk_pid793165
00:44:29.569 Removing: /var/run/dpdk/spdk_pid793167
00:44:29.569 Removing: /var/run/dpdk/spdk_pid805671
00:44:29.569 Removing: /var/run/dpdk/spdk_pid806187
00:44:29.569 Removing: /var/run/dpdk/spdk_pid806612
00:44:29.569 Removing: /var/run/dpdk/spdk_pid807012
00:44:29.569 Removing: /var/run/dpdk/spdk_pid807705
00:44:29.569 Removing: /var/run/dpdk/spdk_pid808426
00:44:29.569 Removing: /var/run/dpdk/spdk_pid809122
00:44:29.569 Removing: /var/run/dpdk/spdk_pid809546
00:44:29.569 Removing: /var/run/dpdk/spdk_pid811970
00:44:29.569 Removing: /var/run/dpdk/spdk_pid812195
00:44:29.569 Removing: /var/run/dpdk/spdk_pid815989
00:44:29.569 Removing: /var/run/dpdk/spdk_pid816099
00:44:29.569 Removing: /var/run/dpdk/spdk_pid819414
00:44:29.569 Removing: /var/run/dpdk/spdk_pid822011
00:44:29.569 Removing: /var/run/dpdk/spdk_pid828800
00:44:29.569 Removing: /var/run/dpdk/spdk_pid829317
00:44:29.569 Removing: /var/run/dpdk/spdk_pid831702
00:44:29.569 Removing: /var/run/dpdk/spdk_pid831974
00:44:29.569 Removing: /var/run/dpdk/spdk_pid834602
00:44:29.569 Removing: /var/run/dpdk/spdk_pid838292
00:44:29.569 Removing: /var/run/dpdk/spdk_pid840333
00:44:29.569 Removing: /var/run/dpdk/spdk_pid847311
00:44:29.569 Removing: /var/run/dpdk/spdk_pid852508
00:44:29.569 Removing: /var/run/dpdk/spdk_pid853705
00:44:29.569 Removing: /var/run/dpdk/spdk_pid854360
00:44:29.569 Removing: /var/run/dpdk/spdk_pid864525
00:44:29.569 Removing: /var/run/dpdk/spdk_pid866774
00:44:29.569 Removing: /var/run/dpdk/spdk_pid868787
00:44:29.569 Removing: /var/run/dpdk/spdk_pid873706
00:44:29.569 Removing: /var/run/dpdk/spdk_pid873828
00:44:29.569 Removing: /var/run/dpdk/spdk_pid876729
00:44:29.569 Removing: /var/run/dpdk/spdk_pid878239
00:44:29.827 Removing: /var/run/dpdk/spdk_pid880143
00:44:29.827 Removing: /var/run/dpdk/spdk_pid880882
00:44:29.827 Removing: /var/run/dpdk/spdk_pid882285
00:44:29.827 Removing: /var/run/dpdk/spdk_pid883157
00:44:29.827 Removing: /var/run/dpdk/spdk_pid888472
00:44:29.827 Removing: /var/run/dpdk/spdk_pid888827
00:44:29.827 Removing: /var/run/dpdk/spdk_pid889218
00:44:29.827 Removing: /var/run/dpdk/spdk_pid890772
00:44:29.827 Removing: /var/run/dpdk/spdk_pid891169
00:44:29.827 Removing: /var/run/dpdk/spdk_pid891451
00:44:29.827 Removing: /var/run/dpdk/spdk_pid893897
00:44:29.827 Removing: /var/run/dpdk/spdk_pid893908
00:44:29.827 Removing: /var/run/dpdk/spdk_pid895368
00:44:29.827 Removing: /var/run/dpdk/spdk_pid895855
00:44:29.827 Removing: /var/run/dpdk/spdk_pid895865
00:44:29.827 Clean
00:44:29.827 12:58:58 -- common/autotest_common.sh@1451 -- # return 0
00:44:29.827 12:58:58 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:44:29.827 12:58:58 -- common/autotest_common.sh@730 -- # xtrace_disable
00:44:29.827 12:58:58 -- common/autotest_common.sh@10 -- # set +x
00:44:29.827 12:58:58 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:44:29.827 12:58:58 -- common/autotest_common.sh@730 -- # xtrace_disable
00:44:29.827 12:58:58 -- common/autotest_common.sh@10 -- # set +x
00:44:29.827 12:58:58 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:29.827 12:58:58 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:44:29.827 12:58:58 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:44:29.827 12:58:58 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:44:29.827 12:58:58 -- spdk/autotest.sh@394 -- # hostname
00:44:29.827 12:58:58 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:44:30.085 geninfo: WARNING: invalid characters removed from testname!
00:45:02.144 12:59:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:05.423 12:59:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:08.701 12:59:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:11.225 12:59:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:14.527 12:59:43 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:17.804 12:59:46 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:20.332 12:59:49 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:45:20.332 12:59:49 -- spdk/autorun.sh@1 -- $ timing_finish
00:45:20.332 12:59:49 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:45:20.332 12:59:49 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:45:20.332 12:59:49 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:45:20.332 12:59:49 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:45:20.332 + [[ -n 419463 ]]
00:45:20.332 + sudo kill 419463
00:45:20.343 [Pipeline] }
00:45:20.358 [Pipeline] // stage
00:45:20.363 [Pipeline] }
00:45:20.377 [Pipeline] // timeout
00:45:20.382 [Pipeline] }
00:45:20.395 [Pipeline] // catchError
00:45:20.400 [Pipeline] }
00:45:20.415 [Pipeline] // wrap
00:45:20.421 [Pipeline] }
00:45:20.433 [Pipeline] // catchError
00:45:20.442 [Pipeline] stage
00:45:20.444 [Pipeline] { (Epilogue)
00:45:20.457 [Pipeline] catchError
00:45:20.458 [Pipeline] {
00:45:20.470 [Pipeline] echo
00:45:20.472 Cleanup processes
00:45:20.477 [Pipeline] sh
00:45:20.765 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:20.765 908045 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:20.779 [Pipeline] sh
00:45:21.091 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:21.091 ++ grep -v 'sudo pgrep'
00:45:21.091 ++ awk '{print $1}'
00:45:21.091 + sudo kill -9
00:45:21.091 + true
00:45:21.108 [Pipeline] sh
00:45:21.391 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:45:33.628 [Pipeline] sh
00:45:33.916 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:45:33.916 Artifacts sizes are good
00:45:33.930 [Pipeline] archiveArtifacts
00:45:33.936 Archiving artifacts
00:45:34.122 [Pipeline] sh
00:45:34.407 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:45:34.421 [Pipeline] cleanWs
00:45:34.429 [WS-CLEANUP] Deleting project workspace...
00:45:34.429 [WS-CLEANUP] Deferred wipeout is used...
00:45:34.435 [WS-CLEANUP] done
00:45:34.436 [Pipeline] }
00:45:34.448 [Pipeline] // catchError
00:45:34.457 [Pipeline] sh
00:45:34.734 + logger -p user.info -t JENKINS-CI
00:45:34.742 [Pipeline] }
00:45:34.756 [Pipeline] // stage
00:45:34.760 [Pipeline] }
00:45:34.773 [Pipeline] // node
00:45:34.778 [Pipeline] End of Pipeline
00:45:34.812 Finished: SUCCESS